Test Report: Docker_Linux_crio_arm64 18063

9a5d81419c51a6c3c4fef58cf8d1de8416716248:2024-02-29:33343

Failed tests (4/320)

Order  Failed test                                           Duration (s)
39     TestAddons/parallel/Ingress                           168.07
89     TestFunctional/serial/ExtraConfig                     37.03
90     TestFunctional/serial/ComponentHealth                 2.71
171    TestIngressAddonLegacy/serial/ValidateIngressAddons   182.13
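To chase one of these failures locally, a single case can be re-run with Go's standard -run filter from a minikube source checkout (the suite drives the out/minikube-linux-arm64 binary that appears throughout the logs below, so that binary needs to be built first). The invocation below is a sketch, not something recorded in this report: the test/integration package path follows minikube's repository layout, and the --minikube-start-args flag used here to match this job's docker driver and crio runtime is an assumption about the suite's flags.

	# sketch only: re-run the failing ingress test against the docker/crio combination (start-args flag assumed)
	go test ./test/integration -run "TestAddons/parallel/Ingress" -timeout 60m \
	  -args --minikube-start-args="--driver=docker --container-runtime=crio"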
TestAddons/parallel/Ingress (168.07s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-847636 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-847636 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-847636 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [73a8f7aa-6339-44e2-b445-e951fd88c0ca] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [73a8f7aa-6339-44e2-b445-e951fd88c0ca] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003775228s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-847636 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-847636 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.482128621s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-847636 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-847636 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.077352709s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-847636 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-847636 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-847636 addons disable ingress --alsologtostderr -v=1: (7.729105761s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-847636
helpers_test.go:235: (dbg) docker inspect addons-847636:

-- stdout --
	[
	    {
	        "Id": "0592fa3c8d3a326c40877f5b724973839712a7af67dff19d2fed49e8de7c30d3",
	        "Created": "2024-02-29T02:26:36.955477107Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1154904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T02:26:37.284587041Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/0592fa3c8d3a326c40877f5b724973839712a7af67dff19d2fed49e8de7c30d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0592fa3c8d3a326c40877f5b724973839712a7af67dff19d2fed49e8de7c30d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/0592fa3c8d3a326c40877f5b724973839712a7af67dff19d2fed49e8de7c30d3/hosts",
	        "LogPath": "/var/lib/docker/containers/0592fa3c8d3a326c40877f5b724973839712a7af67dff19d2fed49e8de7c30d3/0592fa3c8d3a326c40877f5b724973839712a7af67dff19d2fed49e8de7c30d3-json.log",
	        "Name": "/addons-847636",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-847636:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-847636",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/80848cc9026b05cf1f6672d0c30b63512f280d046e9001518cb09e69900ee58a-init/diff:/var/lib/docker/overlay2/330c2f3296cde464d6c1a52ceb432efd04754f92c402ca5b9f20e3ccc2c40d71/diff",
	                "MergedDir": "/var/lib/docker/overlay2/80848cc9026b05cf1f6672d0c30b63512f280d046e9001518cb09e69900ee58a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/80848cc9026b05cf1f6672d0c30b63512f280d046e9001518cb09e69900ee58a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/80848cc9026b05cf1f6672d0c30b63512f280d046e9001518cb09e69900ee58a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-847636",
	                "Source": "/var/lib/docker/volumes/addons-847636/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-847636",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-847636",
	                "name.minikube.sigs.k8s.io": "addons-847636",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5f06fac4d55185f2aa2c7e931e939394aa8ea81a2ebe5f373bef880a8c149d7b",
	            "SandboxKey": "/var/run/docker/netns/5f06fac4d551",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34037"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34036"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34033"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34035"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34034"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-847636": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0592fa3c8d3a",
	                        "addons-847636"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "75e4d3cbdb0222970403847b8e17a2213c2e122b4e9721944ed9e527a3db5116",
	                    "EndpointID": "a86626a58ec79095942a3c25111504905d950267a7f664ced33478c6cb28fa3a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-847636",
	                        "0592fa3c8d3a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-847636 -n addons-847636
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-847636 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-847636 logs -n 25: (1.465862006s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-096542                                                                     | download-only-096542   | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	| delete  | -p download-only-382285                                                                     | download-only-382285   | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	| delete  | -p download-only-400877                                                                     | download-only-400877   | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	| start   | --download-only -p                                                                          | download-docker-591946 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | download-docker-591946                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-591946                                                                   | download-docker-591946 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-495072   | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | binary-mirror-495072                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36415                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-495072                                                                     | binary-mirror-495072   | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	| addons  | enable dashboard -p                                                                         | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | addons-847636                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | addons-847636                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-847636 --wait=true                                                                | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:28 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-847636 ip                                                                            | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:28 UTC | 29 Feb 24 02:28 UTC |
	| addons  | addons-847636 addons disable                                                                | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:28 UTC | 29 Feb 24 02:28 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-847636 addons                                                                        | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:28 UTC | 29 Feb 24 02:28 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:29 UTC | 29 Feb 24 02:29 UTC |
	|         | addons-847636                                                                               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:29 UTC | 29 Feb 24 02:29 UTC |
	|         | -p addons-847636                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-847636 ssh cat                                                                       | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:29 UTC | 29 Feb 24 02:29 UTC |
	|         | /opt/local-path-provisioner/pvc-1ea50d5e-da40-42ee-8a27-66caf9ac73b4_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-847636 addons disable                                                                | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:29 UTC | 29 Feb 24 02:29 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:29 UTC | 29 Feb 24 02:29 UTC |
	|         | addons-847636                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:29 UTC | 29 Feb 24 02:29 UTC |
	|         | -p addons-847636                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-847636 addons                                                                        | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:29 UTC | 29 Feb 24 02:29 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-847636 addons                                                                        | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:29 UTC | 29 Feb 24 02:29 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-847636 ssh curl -s                                                                   | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:29 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-847636 ip                                                                            | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:32 UTC | 29 Feb 24 02:32 UTC |
	| addons  | addons-847636 addons disable                                                                | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:32 UTC | 29 Feb 24 02:32 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-847636 addons disable                                                                | addons-847636          | jenkins | v1.32.0 | 29 Feb 24 02:32 UTC | 29 Feb 24 02:32 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:26:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:26:12.760702 1154458 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:26:12.760849 1154458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:12.760860 1154458 out.go:304] Setting ErrFile to fd 2...
	I0229 02:26:12.760865 1154458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:12.761132 1154458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
	I0229 02:26:12.761554 1154458 out.go:298] Setting JSON to false
	I0229 02:26:12.762391 1154458 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22119,"bootTime":1709151454,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0229 02:26:12.762498 1154458 start.go:139] virtualization:  
	I0229 02:26:12.766690 1154458 out.go:177] * [addons-847636] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0229 02:26:12.768485 1154458 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:26:12.768573 1154458 notify.go:220] Checking for updates...
	I0229 02:26:12.772815 1154458 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:26:12.774928 1154458 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	I0229 02:26:12.776810 1154458 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	I0229 02:26:12.778812 1154458 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0229 02:26:12.780676 1154458 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:26:12.783021 1154458 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:26:12.807399 1154458 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0229 02:26:12.807521 1154458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:26:12.873656 1154458 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-29 02:26:12.864300185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:26:12.873766 1154458 docker.go:295] overlay module found
	I0229 02:26:12.877660 1154458 out.go:177] * Using the docker driver based on user configuration
	I0229 02:26:12.879724 1154458 start.go:299] selected driver: docker
	I0229 02:26:12.879742 1154458 start.go:903] validating driver "docker" against <nil>
	I0229 02:26:12.879755 1154458 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:26:12.880435 1154458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:26:12.936311 1154458 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-29 02:26:12.927349787 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:26:12.936469 1154458 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 02:26:12.936711 1154458 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:26:12.938915 1154458 out.go:177] * Using Docker driver with root privileges
	I0229 02:26:12.940963 1154458 cni.go:84] Creating CNI manager for ""
	I0229 02:26:12.940981 1154458 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0229 02:26:12.940992 1154458 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0229 02:26:12.941007 1154458 start_flags.go:323] config:
	{Name:addons-847636 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-847636 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:26:12.943487 1154458 out.go:177] * Starting control plane node addons-847636 in cluster addons-847636
	I0229 02:26:12.945407 1154458 cache.go:121] Beginning downloading kic base image for docker with crio
	I0229 02:26:12.947256 1154458 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 02:26:12.948816 1154458 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:26:12.948869 1154458 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0229 02:26:12.948880 1154458 cache.go:56] Caching tarball of preloaded images
	I0229 02:26:12.948919 1154458 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 02:26:12.948959 1154458 preload.go:174] Found /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0229 02:26:12.948969 1154458 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 02:26:12.949319 1154458 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/config.json ...
	I0229 02:26:12.949344 1154458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/config.json: {Name:mk0cce2da8f53e8309f705ef63b590dc79150d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:26:12.963605 1154458 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0229 02:26:12.963725 1154458 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0229 02:26:12.963744 1154458 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0229 02:26:12.963749 1154458 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0229 02:26:12.963756 1154458 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0229 02:26:12.963761 1154458 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from local cache
	I0229 02:26:28.775966 1154458 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from cached tarball
	I0229 02:26:28.776023 1154458 cache.go:194] Successfully downloaded all kic artifacts
	I0229 02:26:28.776055 1154458 start.go:365] acquiring machines lock for addons-847636: {Name:mk1e9b4e6106f6a8a7594173a4d65c3f86fce72d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:26:28.776953 1154458 start.go:369] acquired machines lock for "addons-847636" in 867.104µs
	I0229 02:26:28.777010 1154458 start.go:93] Provisioning new machine with config: &{Name:addons-847636 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-847636 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:26:28.777114 1154458 start.go:125] createHost starting for "" (driver="docker")
	I0229 02:26:28.779486 1154458 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0229 02:26:28.779733 1154458 start.go:159] libmachine.API.Create for "addons-847636" (driver="docker")
	I0229 02:26:28.779767 1154458 client.go:168] LocalClient.Create starting
	I0229 02:26:28.779882 1154458 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem
	I0229 02:26:29.851067 1154458 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/cert.pem
	I0229 02:26:30.648775 1154458 cli_runner.go:164] Run: docker network inspect addons-847636 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 02:26:30.666878 1154458 cli_runner.go:211] docker network inspect addons-847636 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 02:26:30.666972 1154458 network_create.go:281] running [docker network inspect addons-847636] to gather additional debugging logs...
	I0229 02:26:30.666996 1154458 cli_runner.go:164] Run: docker network inspect addons-847636
	W0229 02:26:30.681675 1154458 cli_runner.go:211] docker network inspect addons-847636 returned with exit code 1
	I0229 02:26:30.681711 1154458 network_create.go:284] error running [docker network inspect addons-847636]: docker network inspect addons-847636: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-847636 not found
	I0229 02:26:30.681732 1154458 network_create.go:286] output of [docker network inspect addons-847636]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-847636 not found
	
	** /stderr **
	I0229 02:26:30.681832 1154458 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 02:26:30.696904 1154458 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400253dba0}
	I0229 02:26:30.696949 1154458 network_create.go:124] attempt to create docker network addons-847636 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0229 02:26:30.697006 1154458 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-847636 addons-847636
	I0229 02:26:30.757238 1154458 network_create.go:108] docker network addons-847636 192.168.49.0/24 created
	I0229 02:26:30.757272 1154458 kic.go:121] calculated static IP "192.168.49.2" for the "addons-847636" container
	I0229 02:26:30.757347 1154458 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 02:26:30.771903 1154458 cli_runner.go:164] Run: docker volume create addons-847636 --label name.minikube.sigs.k8s.io=addons-847636 --label created_by.minikube.sigs.k8s.io=true
	I0229 02:26:30.787432 1154458 oci.go:103] Successfully created a docker volume addons-847636
	I0229 02:26:30.787537 1154458 cli_runner.go:164] Run: docker run --rm --name addons-847636-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-847636 --entrypoint /usr/bin/test -v addons-847636:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 02:26:32.705295 1154458 cli_runner.go:217] Completed: docker run --rm --name addons-847636-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-847636 --entrypoint /usr/bin/test -v addons-847636:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (1.917718472s)
	I0229 02:26:32.705327 1154458 oci.go:107] Successfully prepared a docker volume addons-847636
	I0229 02:26:32.705348 1154458 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:26:32.705367 1154458 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 02:26:32.705471 1154458 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-847636:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0229 02:26:36.858079 1154458 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-847636:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (4.152567462s)
	I0229 02:26:36.858115 1154458 kic.go:203] duration metric: took 4.152744 seconds to extract preloaded images to volume
	W0229 02:26:36.858277 1154458 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0229 02:26:36.858377 1154458 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0229 02:26:36.937542 1154458 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-847636 --name addons-847636 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-847636 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-847636 --network addons-847636 --ip 192.168.49.2 --volume addons-847636:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0229 02:26:37.292369 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Running}}
	I0229 02:26:37.320370 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:26:37.341865 1154458 cli_runner.go:164] Run: docker exec addons-847636 stat /var/lib/dpkg/alternatives/iptables
	I0229 02:26:37.406062 1154458 oci.go:144] the created container "addons-847636" has a running status.
	I0229 02:26:37.406090 1154458 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa...
	I0229 02:26:38.383041 1154458 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0229 02:26:38.405870 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:26:38.422081 1154458 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0229 02:26:38.422101 1154458 kic_runner.go:114] Args: [docker exec --privileged addons-847636 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0229 02:26:38.477447 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:26:38.492832 1154458 machine.go:88] provisioning docker machine ...
	I0229 02:26:38.492865 1154458 ubuntu.go:169] provisioning hostname "addons-847636"
	I0229 02:26:38.492933 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:26:38.509152 1154458 main.go:141] libmachine: Using SSH client type: native
	I0229 02:26:38.509427 1154458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 34037 <nil> <nil>}
	I0229 02:26:38.509445 1154458 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-847636 && echo "addons-847636" | sudo tee /etc/hostname
	I0229 02:26:38.648689 1154458 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-847636
	
	I0229 02:26:38.648775 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:26:38.665104 1154458 main.go:141] libmachine: Using SSH client type: native
	I0229 02:26:38.665362 1154458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 34037 <nil> <nil>}
	I0229 02:26:38.665386 1154458 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-847636' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-847636/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-847636' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:26:38.791856 1154458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:26:38.791884 1154458 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18063-1148303/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-1148303/.minikube}
	I0229 02:26:38.791904 1154458 ubuntu.go:177] setting up certificates
	I0229 02:26:38.791914 1154458 provision.go:83] configureAuth start
	I0229 02:26:38.792021 1154458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-847636
	I0229 02:26:38.808201 1154458 provision.go:138] copyHostCerts
	I0229 02:26:38.808285 1154458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-1148303/.minikube/key.pem (1675 bytes)
	I0229 02:26:38.808416 1154458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.pem (1082 bytes)
	I0229 02:26:38.808502 1154458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-1148303/.minikube/cert.pem (1123 bytes)
	I0229 02:26:38.808574 1154458 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca-key.pem org=jenkins.addons-847636 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-847636]
	I0229 02:26:39.225469 1154458 provision.go:172] copyRemoteCerts
	I0229 02:26:39.225547 1154458 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:26:39.225594 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:26:39.246670 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:26:39.340600 1154458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:26:39.363661 1154458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0229 02:26:39.386746 1154458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 02:26:39.410017 1154458 provision.go:86] duration metric: configureAuth took 618.090291ms
	I0229 02:26:39.410045 1154458 ubuntu.go:193] setting minikube options for container-runtime
	I0229 02:26:39.410236 1154458 config.go:182] Loaded profile config "addons-847636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:26:39.410367 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:26:39.428094 1154458 main.go:141] libmachine: Using SSH client type: native
	I0229 02:26:39.428331 1154458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 34037 <nil> <nil>}
	I0229 02:26:39.428351 1154458 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:26:39.652249 1154458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:26:39.652274 1154458 machine.go:91] provisioned docker machine in 1.159419959s
	I0229 02:26:39.652285 1154458 client.go:171] LocalClient.Create took 10.872510621s
	I0229 02:26:39.652340 1154458 start.go:167] duration metric: libmachine.API.Create for "addons-847636" took 10.872594091s
	I0229 02:26:39.652355 1154458 start.go:300] post-start starting for "addons-847636" (driver="docker")
	I0229 02:26:39.652384 1154458 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:26:39.652472 1154458 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:26:39.652547 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:26:39.669655 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:26:39.760959 1154458 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:26:39.763863 1154458 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0229 02:26:39.763899 1154458 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0229 02:26:39.763910 1154458 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0229 02:26:39.763917 1154458 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0229 02:26:39.763927 1154458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-1148303/.minikube/addons for local assets ...
	I0229 02:26:39.764019 1154458 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-1148303/.minikube/files for local assets ...
	I0229 02:26:39.764046 1154458 start.go:303] post-start completed in 111.685218ms
	I0229 02:26:39.764368 1154458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-847636
	I0229 02:26:39.779360 1154458 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/config.json ...
	I0229 02:26:39.779638 1154458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 02:26:39.779690 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:26:39.794315 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:26:39.884759 1154458 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 02:26:39.888897 1154458 start.go:128] duration metric: createHost completed in 11.111764449s
	I0229 02:26:39.888922 1154458 start.go:83] releasing machines lock for "addons-847636", held for 11.111939857s
	I0229 02:26:39.888993 1154458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-847636
	I0229 02:26:39.906056 1154458 ssh_runner.go:195] Run: cat /version.json
	I0229 02:26:39.906087 1154458 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:26:39.906113 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:26:39.906165 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:26:39.922982 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:26:39.933522 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:26:40.129537 1154458 ssh_runner.go:195] Run: systemctl --version
	I0229 02:26:40.133863 1154458 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:26:40.275956 1154458 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 02:26:40.280136 1154458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:26:40.299791 1154458 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0229 02:26:40.299905 1154458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:26:40.334294 1154458 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0229 02:26:40.334317 1154458 start.go:475] detecting cgroup driver to use...
	I0229 02:26:40.334364 1154458 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 02:26:40.334442 1154458 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:26:40.350704 1154458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:26:40.361943 1154458 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:26:40.362059 1154458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:26:40.376386 1154458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:26:40.390457 1154458 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:26:40.483800 1154458 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:26:40.580168 1154458 docker.go:233] disabling docker service ...
	I0229 02:26:40.580240 1154458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:26:40.601661 1154458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:26:40.614850 1154458 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:26:40.704433 1154458 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:26:40.804990 1154458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:26:40.816356 1154458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:26:40.833318 1154458 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:26:40.833406 1154458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:26:40.843122 1154458 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:26:40.843225 1154458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:26:40.852970 1154458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:26:40.863051 1154458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:26:40.873430 1154458 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:26:40.882003 1154458 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:26:40.890221 1154458 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
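For reference, the three sed edits and the ip_forward write above configure CRI-O's pause image, cgroup manager, and conmon cgroup, and enable IPv4 forwarding for the pod network. A minimal sketch of how to confirm the result on the node (assuming the stock kicbase config file layout):
	# Sketch: keys the edits above end up setting in /etc/crio/crio.conf.d/02-crio.conf
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected values, per the commands above:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	# kindnet and kube-proxy also need IPv4 forwarding on the node:
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"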
	I0229 02:26:40.898714 1154458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:26:40.989472 1154458 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:26:41.105780 1154458 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:26:41.105875 1154458 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:26:41.112179 1154458 start.go:543] Will wait 60s for crictl version
	I0229 02:26:41.112293 1154458 ssh_runner.go:195] Run: which crictl
	I0229 02:26:41.115678 1154458 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:26:41.155507 1154458 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0229 02:26:41.155619 1154458 ssh_runner.go:195] Run: crio --version
	I0229 02:26:41.195362 1154458 ssh_runner.go:195] Run: crio --version
	I0229 02:26:41.235343 1154458 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0229 02:26:41.237282 1154458 cli_runner.go:164] Run: docker network inspect addons-847636 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 02:26:41.252412 1154458 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0229 02:26:41.255922 1154458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:26:41.266478 1154458 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:26:41.266556 1154458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:26:41.334649 1154458 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:26:41.334675 1154458 crio.go:415] Images already preloaded, skipping extraction
	I0229 02:26:41.334731 1154458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:26:41.370781 1154458 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:26:41.370806 1154458 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:26:41.370888 1154458 ssh_runner.go:195] Run: crio config
	I0229 02:26:41.436260 1154458 cni.go:84] Creating CNI manager for ""
	I0229 02:26:41.436286 1154458 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0229 02:26:41.436312 1154458 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:26:41.436333 1154458 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-847636 NodeName:addons-847636 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:26:41.436499 1154458 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-847636"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
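One detail worth noting in the generated config above: the kubelet's cgroupDriver must match the cgroup_manager written into the CRI-O config earlier in this log (both are cgroupfs here). A minimal sketch of how to confirm the two agree on the node, once the config has been copied to /var/tmp/minikube/kubeadm.yaml later in this log:
	# Sketch: kubelet and CRI-O should both report cgroupfs
	grep '^cgroupDriver:' /var/tmp/minikube/kubeadm.yaml
	sudo grep -E '^\s*cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf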
	
	I0229 02:26:41.436600 1154458 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-847636 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-847636 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
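The kubelet unit drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. As a rough sketch of the equivalent manual steps on a systemd host (the local filename 10-kubeadm.conf here is a placeholder, not a path taken from this run):
	# Sketch: applying a kubelet systemd drop-in like the one above by hand
	sudo install -d /etc/systemd/system/kubelet.service.d
	sudo install -m 0644 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet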
	I0229 02:26:41.436682 1154458 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:26:41.445719 1154458 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:26:41.445820 1154458 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:26:41.454823 1154458 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0229 02:26:41.472584 1154458 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:26:41.490820 1154458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0229 02:26:41.508845 1154458 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0229 02:26:41.512331 1154458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:26:41.523053 1154458 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636 for IP: 192.168.49.2
	I0229 02:26:41.523087 1154458 certs.go:190] acquiring lock for shared ca certs: {Name:mk629bf08f2bf9bf9dfe188d027237a0e3bc8e94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:26:41.523725 1154458 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.key
	I0229 02:26:41.691648 1154458 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.crt ...
	I0229 02:26:41.691678 1154458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.crt: {Name:mkb879b69bfa8811fd279001636e8ebfda14298a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:26:41.691873 1154458 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.key ...
	I0229 02:26:41.691887 1154458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.key: {Name:mke211ff4aec60d15910dec780dc1b822f5453dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:26:41.692001 1154458 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.key
	I0229 02:26:42.017237 1154458 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.crt ...
	I0229 02:26:42.017272 1154458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.crt: {Name:mk194f93effbcceb914f75b91defaba2232a6b71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:26:42.017464 1154458 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.key ...
	I0229 02:26:42.017485 1154458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.key: {Name:mk5bca22b9257b4c649539291101f08e33480dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:26:42.018149 1154458 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.key
	I0229 02:26:42.018174 1154458 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt with IP's: []
	I0229 02:26:42.568209 1154458 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt ...
	I0229 02:26:42.568241 1154458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: {Name:mk990938593fffce4134df4346107911d4da9826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:26:42.568914 1154458 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.key ...
	I0229 02:26:42.568930 1154458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.key: {Name:mk5750c58bd006e4b116ca0db03c8c9b0480b4ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:26:42.569657 1154458 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/apiserver.key.dd3b5fb2
	I0229 02:26:42.569682 1154458 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 02:26:42.888966 1154458 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/apiserver.crt.dd3b5fb2 ...
	I0229 02:26:42.889000 1154458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/apiserver.crt.dd3b5fb2: {Name:mk7939bc7876192c4ee7c4e31dd0f0f041013961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:26:42.889193 1154458 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/apiserver.key.dd3b5fb2 ...
	I0229 02:26:42.889211 1154458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/apiserver.key.dd3b5fb2: {Name:mkcd2aa7b2d223f4c9f8bbf3b98aae7f1e8302c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:26:42.889302 1154458 certs.go:337] copying /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/apiserver.crt
	I0229 02:26:42.889380 1154458 certs.go:341] copying /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/apiserver.key
	I0229 02:26:42.889434 1154458 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/proxy-client.key
	I0229 02:26:42.889456 1154458 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/proxy-client.crt with IP's: []
	I0229 02:26:43.175575 1154458 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/proxy-client.crt ...
	I0229 02:26:43.175606 1154458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/proxy-client.crt: {Name:mk9dea1497474d6bc5a2582187332bc7de4a774c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:26:43.175800 1154458 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/proxy-client.key ...
	I0229 02:26:43.175816 1154458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/proxy-client.key: {Name:mk2d34a13e2c836dc2eadda0312b267e814315ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
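The certificate steps above (CA, client cert, SAN-bearing apiserver cert, proxy-client cert) are performed in Go by minikube's crypto helpers. Purely as an illustration of what gets generated, and not the exact parameters or key sizes minikube uses, an openssl equivalent for the CA plus the apiserver certificate with the SANs listed above would look roughly like this:
	# Sketch only: illustrative openssl equivalent of the CA + apiserver cert generation above
	openssl genrsa -out ca.key 2048
	openssl req -x509 -new -key ca.key -subj "/CN=minikubeCA" -days 365 -out ca.crt
	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
	  -extfile <(printf "subjectAltName=IP:192.168.49.2,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1") \
	  -out apiserver.crt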
	I0229 02:26:43.176070 1154458 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 02:26:43.176112 1154458 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:26:43.176139 1154458 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:26:43.176172 1154458 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/key.pem (1675 bytes)
	I0229 02:26:43.176814 1154458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:26:43.201335 1154458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:26:43.225414 1154458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:26:43.249793 1154458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 02:26:43.272981 1154458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:26:43.296208 1154458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:26:43.319594 1154458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:26:43.342676 1154458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 02:26:43.365689 1154458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:26:43.388639 1154458 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:26:43.405520 1154458 ssh_runner.go:195] Run: openssl version
	I0229 02:26:43.410937 1154458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:26:43.420627 1154458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:26:43.424289 1154458 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:26:43.424359 1154458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:26:43.431814 1154458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
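The b5213941.0 name in the symlink above is not arbitrary: it is the OpenSSL subject hash of the minikube CA, which is what makes the certificate discoverable under /etc/ssl/certs. The check minikube performs can be reproduced as follows (a sketch; the hash value is specific to this CA):
	# Sketch: the symlink name is derived from the CA's subject hash
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
	ls -l /etc/ssl/certs/b5213941.0                                           # points at minikubeCA.pem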
	I0229 02:26:43.441116 1154458 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:26:43.444301 1154458 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:26:43.444355 1154458 kubeadm.go:404] StartCluster: {Name:addons-847636 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-847636 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:26:43.444446 1154458 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:26:43.444505 1154458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:26:43.484556 1154458 cri.go:89] found id: ""
	I0229 02:26:43.484626 1154458 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:26:43.493151 1154458 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:26:43.501578 1154458 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0229 02:26:43.501669 1154458 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:26:43.510310 1154458 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:26:43.510356 1154458 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0229 02:26:43.558169 1154458 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:26:43.558597 1154458 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:26:43.598914 1154458 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0229 02:26:43.599055 1154458 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1055-aws
	I0229 02:26:43.599113 1154458 kubeadm.go:322] OS: Linux
	I0229 02:26:43.599185 1154458 kubeadm.go:322] CGROUPS_CPU: enabled
	I0229 02:26:43.599263 1154458 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0229 02:26:43.599341 1154458 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0229 02:26:43.599419 1154458 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0229 02:26:43.599496 1154458 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0229 02:26:43.599573 1154458 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0229 02:26:43.599642 1154458 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0229 02:26:43.599719 1154458 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0229 02:26:43.599790 1154458 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0229 02:26:43.674102 1154458 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:26:43.674224 1154458 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:26:43.674322 1154458 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:26:43.895401 1154458 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:26:43.898547 1154458 out.go:204]   - Generating certificates and keys ...
	I0229 02:26:43.898642 1154458 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:26:43.898712 1154458 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:26:44.132668 1154458 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 02:26:44.323687 1154458 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 02:26:44.479360 1154458 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 02:26:44.948518 1154458 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 02:26:45.515180 1154458 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 02:26:45.515322 1154458 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-847636 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0229 02:26:46.467263 1154458 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 02:26:46.467415 1154458 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-847636 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0229 02:26:46.990393 1154458 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 02:26:47.284874 1154458 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 02:26:47.555263 1154458 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 02:26:47.555530 1154458 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:26:47.810792 1154458 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:26:48.454566 1154458 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:26:49.328297 1154458 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:26:49.900264 1154458 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:26:49.900883 1154458 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:26:49.904995 1154458 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:26:49.910938 1154458 out.go:204]   - Booting up control plane ...
	I0229 02:26:49.911058 1154458 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:26:49.911156 1154458 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:26:49.911241 1154458 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:26:49.919442 1154458 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:26:49.920684 1154458 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:26:49.920883 1154458 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:26:50.021633 1154458 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:26:58.024786 1154458 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002879 seconds
	I0229 02:26:58.024915 1154458 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:26:58.046251 1154458 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:26:58.579818 1154458 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:26:58.580022 1154458 kubeadm.go:322] [mark-control-plane] Marking the node addons-847636 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:26:59.091437 1154458 kubeadm.go:322] [bootstrap-token] Using token: 0u0s9n.5kpi0cazn1zodshq
	I0229 02:26:59.093480 1154458 out.go:204]   - Configuring RBAC rules ...
	I0229 02:26:59.093597 1154458 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:26:59.099808 1154458 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:26:59.108447 1154458 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:26:59.112414 1154458 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:26:59.116239 1154458 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:26:59.120928 1154458 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:26:59.134092 1154458 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:26:59.349276 1154458 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:26:59.504888 1154458 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:26:59.505856 1154458 kubeadm.go:322] 
	I0229 02:26:59.505924 1154458 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:26:59.505929 1154458 kubeadm.go:322] 
	I0229 02:26:59.506007 1154458 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:26:59.506012 1154458 kubeadm.go:322] 
	I0229 02:26:59.506036 1154458 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:26:59.506092 1154458 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:26:59.506141 1154458 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:26:59.506145 1154458 kubeadm.go:322] 
	I0229 02:26:59.506197 1154458 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:26:59.506201 1154458 kubeadm.go:322] 
	I0229 02:26:59.506246 1154458 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:26:59.506251 1154458 kubeadm.go:322] 
	I0229 02:26:59.506301 1154458 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:26:59.506373 1154458 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:26:59.506438 1154458 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:26:59.506442 1154458 kubeadm.go:322] 
	I0229 02:26:59.506522 1154458 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:26:59.506595 1154458 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:26:59.506599 1154458 kubeadm.go:322] 
	I0229 02:26:59.506679 1154458 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0u0s9n.5kpi0cazn1zodshq \
	I0229 02:26:59.506778 1154458 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0eed3bb06de93eaacfde26833aa0934eb72e0c80231d6eec065ff79fcf497e29 \
	I0229 02:26:59.506798 1154458 kubeadm.go:322] 	--control-plane 
	I0229 02:26:59.506806 1154458 kubeadm.go:322] 
	I0229 02:26:59.506887 1154458 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:26:59.506892 1154458 kubeadm.go:322] 
	I0229 02:26:59.508612 1154458 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0u0s9n.5kpi0cazn1zodshq \
	I0229 02:26:59.508794 1154458 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0eed3bb06de93eaacfde26833aa0934eb72e0c80231d6eec065ff79fcf497e29 
	I0229 02:26:59.510557 1154458 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0229 02:26:59.510664 1154458 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:26:59.510680 1154458 cni.go:84] Creating CNI manager for ""
	I0229 02:26:59.510687 1154458 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0229 02:26:59.514694 1154458 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0229 02:26:59.516550 1154458 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 02:26:59.529245 1154458 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 02:26:59.529264 1154458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 02:26:59.570053 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
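After the CNI manifest is applied above, a quick way to confirm the CNI pods come up is sketched below. The daemonset name kindnet is an assumption based on minikube's bundled kindnet manifest, not something printed in this log:
	# Sketch: wait for the kindnet daemonset applied above to roll out
	sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system rollout status daemonset kindnet --timeout=120s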
	I0229 02:27:00.566741 1154458 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:27:00.566872 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:00.566945 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=addons-847636 minikube.k8s.io/updated_at=2024_02_29T02_27_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:00.755271 1154458 ops.go:34] apiserver oom_adj: -16
	I0229 02:27:00.755353 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:01.255927 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:01.755946 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:02.255453 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:02.755814 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:03.256062 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:03.756380 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:04.256003 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:04.755464 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:05.256094 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:05.755744 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:06.256180 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:06.756055 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:07.255591 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:07.755967 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:08.255517 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:08.755488 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:09.256181 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:09.756359 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:10.255701 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:10.755735 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:11.255757 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:11.755442 1154458 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:27:11.872317 1154458 kubeadm.go:1088] duration metric: took 11.305492886s to wait for elevateKubeSystemPrivileges.
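The block of repeated "kubectl get sa default" calls above is minikube polling every 500ms until the default ServiceAccount exists in the new cluster (it is created asynchronously by the controller-manager), which is what the elevateKubeSystemPrivileges step waits on. A rough standalone equivalent of that loop:
	# Sketch: the polling above, expressed as a plain bash loop
	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done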
	I0229 02:27:11.872349 1154458 kubeadm.go:406] StartCluster complete in 28.427998296s
	I0229 02:27:11.872374 1154458 settings.go:142] acquiring lock: {Name:mk749db1aa854bc5a32d1a0b4d36b81f911e799c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:27:11.872495 1154458 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-1148303/kubeconfig
	I0229 02:27:11.872868 1154458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/kubeconfig: {Name:mka2c9192ec48968c9ed900867eac085a9478c66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:27:11.875629 1154458 config.go:182] Loaded profile config "addons-847636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:27:11.875689 1154458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:27:11.875853 1154458 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0229 02:27:11.875964 1154458 addons.go:69] Setting yakd=true in profile "addons-847636"
	I0229 02:27:11.876002 1154458 addons.go:234] Setting addon yakd=true in "addons-847636"
	I0229 02:27:11.876048 1154458 host.go:66] Checking if "addons-847636" exists ...
	I0229 02:27:11.876182 1154458 addons.go:69] Setting cloud-spanner=true in profile "addons-847636"
	I0229 02:27:11.876199 1154458 addons.go:234] Setting addon cloud-spanner=true in "addons-847636"
	I0229 02:27:11.876223 1154458 host.go:66] Checking if "addons-847636" exists ...
	I0229 02:27:11.876739 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:11.877083 1154458 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-847636"
	I0229 02:27:11.877121 1154458 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-847636"
	I0229 02:27:11.877156 1154458 host.go:66] Checking if "addons-847636" exists ...
	I0229 02:27:11.877532 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:11.878156 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:11.878499 1154458 addons.go:69] Setting default-storageclass=true in profile "addons-847636"
	I0229 02:27:11.878524 1154458 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-847636"
	I0229 02:27:11.878768 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:11.884143 1154458 addons.go:69] Setting gcp-auth=true in profile "addons-847636"
	I0229 02:27:11.884182 1154458 mustload.go:65] Loading cluster: addons-847636
	I0229 02:27:11.884378 1154458 config.go:182] Loaded profile config "addons-847636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:27:11.884638 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:11.887698 1154458 addons.go:69] Setting registry=true in profile "addons-847636"
	I0229 02:27:11.887724 1154458 addons.go:234] Setting addon registry=true in "addons-847636"
	I0229 02:27:11.887777 1154458 host.go:66] Checking if "addons-847636" exists ...
	I0229 02:27:11.888291 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:11.898648 1154458 addons.go:69] Setting ingress=true in profile "addons-847636"
	I0229 02:27:11.898698 1154458 addons.go:234] Setting addon ingress=true in "addons-847636"
	I0229 02:27:11.898755 1154458 host.go:66] Checking if "addons-847636" exists ...
	I0229 02:27:11.899226 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:11.900026 1154458 addons.go:69] Setting storage-provisioner=true in profile "addons-847636"
	I0229 02:27:11.900056 1154458 addons.go:234] Setting addon storage-provisioner=true in "addons-847636"
	I0229 02:27:11.900096 1154458 host.go:66] Checking if "addons-847636" exists ...
	I0229 02:27:11.900597 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:11.926022 1154458 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-847636"
	I0229 02:27:11.926068 1154458 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-847636"
	I0229 02:27:11.926410 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:11.926573 1154458 addons.go:69] Setting ingress-dns=true in profile "addons-847636"
	I0229 02:27:11.926589 1154458 addons.go:234] Setting addon ingress-dns=true in "addons-847636"
	I0229 02:27:11.926658 1154458 host.go:66] Checking if "addons-847636" exists ...
	I0229 02:27:11.927093 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:11.946568 1154458 addons.go:69] Setting volumesnapshots=true in profile "addons-847636"
	I0229 02:27:11.946606 1154458 addons.go:234] Setting addon volumesnapshots=true in "addons-847636"
	I0229 02:27:11.946661 1154458 host.go:66] Checking if "addons-847636" exists ...
	I0229 02:27:11.947175 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:11.991194 1154458 addons.go:69] Setting inspektor-gadget=true in profile "addons-847636"
	I0229 02:27:11.993160 1154458 addons.go:234] Setting addon inspektor-gadget=true in "addons-847636"
	I0229 02:27:11.993318 1154458 host.go:66] Checking if "addons-847636" exists ...
	I0229 02:27:12.043138 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:12.033566 1154458 addons.go:69] Setting metrics-server=true in profile "addons-847636"
	I0229 02:27:12.033597 1154458 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-847636"
	I0229 02:27:12.073770 1154458 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-847636"
	I0229 02:27:12.073825 1154458 host.go:66] Checking if "addons-847636" exists ...
	I0229 02:27:12.074285 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:12.080805 1154458 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0229 02:27:12.086139 1154458 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0229 02:27:12.086213 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0229 02:27:12.086304 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:27:12.095290 1154458 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0229 02:27:12.093053 1154458 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0229 02:27:12.093078 1154458 addons.go:234] Setting addon metrics-server=true in "addons-847636"
	I0229 02:27:12.094116 1154458 addons.go:234] Setting addon default-storageclass=true in "addons-847636"
	I0229 02:27:12.096950 1154458 host.go:66] Checking if "addons-847636" exists ...
	I0229 02:27:12.098592 1154458 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0229 02:27:12.098605 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0229 02:27:12.098659 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:27:12.113304 1154458 out.go:177]   - Using image docker.io/registry:2.8.3
	I0229 02:27:12.111406 1154458 host.go:66] Checking if "addons-847636" exists ...
	I0229 02:27:12.111436 1154458 host.go:66] Checking if "addons-847636" exists ...
	I0229 02:27:12.115457 1154458 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.6
	I0229 02:27:12.116373 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:12.118603 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:12.120539 1154458 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0229 02:27:12.143151 1154458 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 02:27:12.149711 1154458 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 02:27:12.156471 1154458 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 02:27:12.156497 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0229 02:27:12.156569 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:27:12.152523 1154458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 02:27:12.152532 1154458 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0229 02:27:12.167959 1154458 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0229 02:27:12.164166 1154458 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-847636"
	I0229 02:27:12.164185 1154458 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:27:12.164260 1154458 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0229 02:27:12.170604 1154458 host.go:66] Checking if "addons-847636" exists ...
	I0229 02:27:12.170665 1154458 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0229 02:27:12.170684 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0229 02:27:12.172597 1154458 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0229 02:27:12.173132 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:12.173142 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0229 02:27:12.177725 1154458 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:27:12.177797 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:27:12.198167 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:27:12.200185 1154458 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0229 02:27:12.205210 1154458 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0229 02:27:12.207403 1154458 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0229 02:27:12.211230 1154458 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0229 02:27:12.213330 1154458 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0229 02:27:12.217371 1154458 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0229 02:27:12.219371 1154458 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0229 02:27:12.219391 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0229 02:27:12.219463 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:27:12.269695 1154458 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 02:27:12.269718 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0229 02:27:12.269786 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:27:12.218003 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:27:12.292320 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:27:12.303396 1154458 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0229 02:27:12.301641 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:27:12.305706 1154458 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:27:12.305724 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:27:12.305798 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:27:12.311298 1154458 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0229 02:27:12.313305 1154458 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0229 02:27:12.313326 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0229 02:27:12.313393 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:27:12.365467 1154458 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.4
	I0229 02:27:12.377621 1154458 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0229 02:27:12.377644 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0229 02:27:12.377724 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:27:12.428618 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:27:12.461850 1154458 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0229 02:27:12.465332 1154458 out.go:177]   - Using image docker.io/busybox:stable
	I0229 02:27:12.467439 1154458 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0229 02:27:12.467493 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0229 02:27:12.467576 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:27:12.477185 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:27:12.487100 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:27:12.503073 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:27:12.503859 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:27:12.517681 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:27:12.544959 1154458 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:27:12.544983 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:27:12.545047 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:27:12.566461 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:27:12.567558 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:27:12.569214 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:27:12.569803 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:27:12.592423 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	W0229 02:27:12.600228 1154458 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0229 02:27:12.600266 1154458 retry.go:31] will retry after 291.161349ms: ssh: handshake failed: EOF
	I0229 02:27:12.619152 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:27:12.677347 1154458 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0229 02:27:12.677383 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0229 02:27:12.796870 1154458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0229 02:27:12.813404 1154458 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-847636" context rescaled to 1 replicas
	I0229 02:27:12.813443 1154458 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:27:12.815550 1154458 out.go:177] * Verifying Kubernetes components...
	I0229 02:27:12.817469 1154458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:27:12.848922 1154458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 02:27:12.852963 1154458 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0229 02:27:12.852987 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0229 02:27:12.858202 1154458 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0229 02:27:12.858228 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0229 02:27:12.897000 1154458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0229 02:27:12.904798 1154458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:27:12.906568 1154458 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0229 02:27:12.906590 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0229 02:27:12.941910 1154458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 02:27:12.946455 1154458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:27:12.946477 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0229 02:27:12.956703 1154458 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0229 02:27:12.956730 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0229 02:27:13.009839 1154458 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0229 02:27:13.009867 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0229 02:27:13.033917 1154458 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0229 02:27:13.033943 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0229 02:27:13.035436 1154458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:27:13.044816 1154458 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0229 02:27:13.044839 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0229 02:27:13.078454 1154458 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0229 02:27:13.078480 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0229 02:27:13.090270 1154458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:27:13.090295 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:27:13.153884 1154458 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0229 02:27:13.153918 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0229 02:27:13.196871 1154458 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0229 02:27:13.196905 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0229 02:27:13.224779 1154458 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0229 02:27:13.224805 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0229 02:27:13.242831 1154458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0229 02:27:13.272849 1154458 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:27:13.272876 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:27:13.286345 1154458 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0229 02:27:13.286377 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0229 02:27:13.323425 1154458 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0229 02:27:13.323451 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0229 02:27:13.377139 1154458 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0229 02:27:13.377167 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0229 02:27:13.417279 1154458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0229 02:27:13.432973 1154458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0229 02:27:13.451320 1154458 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0229 02:27:13.451358 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0229 02:27:13.461173 1154458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:27:13.481901 1154458 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0229 02:27:13.481924 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0229 02:27:13.578799 1154458 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 02:27:13.578823 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0229 02:27:13.608010 1154458 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0229 02:27:13.608037 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0229 02:27:13.634573 1154458 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0229 02:27:13.634600 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0229 02:27:13.689347 1154458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 02:27:13.695322 1154458 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0229 02:27:13.695363 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0229 02:27:13.773224 1154458 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0229 02:27:13.773247 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0229 02:27:13.793069 1154458 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0229 02:27:13.793092 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0229 02:27:13.894253 1154458 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0229 02:27:13.894281 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0229 02:27:13.907915 1154458 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0229 02:27:13.907956 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0229 02:27:14.005655 1154458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0229 02:27:14.024127 1154458 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0229 02:27:14.024153 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0229 02:27:14.114571 1154458 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0229 02:27:14.114602 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0229 02:27:14.221573 1154458 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0229 02:27:14.221598 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0229 02:27:14.313437 1154458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0229 02:27:14.743192 1154458 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.578908315s)
	I0229 02:27:14.743233 1154458 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0229 02:27:16.917475 1154458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.120573006s)
	I0229 02:27:16.917568 1154458 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.100068412s)
	I0229 02:27:16.918646 1154458 node_ready.go:35] waiting up to 6m0s for node "addons-847636" to be "Ready" ...
	I0229 02:27:18.934497 1154458 node_ready.go:58] node "addons-847636" has status "Ready":"False"
	I0229 02:27:19.059744 1154458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.21078519s)
	I0229 02:27:19.059783 1154458 addons.go:470] Verifying addon ingress=true in "addons-847636"
	I0229 02:27:19.062066 1154458 out.go:177] * Verifying ingress addon...
	I0229 02:27:19.059939 1154458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.162912786s)
	I0229 02:27:19.059958 1154458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.155131601s)
	I0229 02:27:19.060094 1154458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.118141839s)
	I0229 02:27:19.060133 1154458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.02467007s)
	I0229 02:27:19.060184 1154458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.817328492s)
	I0229 02:27:19.060218 1154458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.642907996s)
	I0229 02:27:19.060268 1154458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.627274382s)
	I0229 02:27:19.060330 1154458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.599132242s)
	I0229 02:27:19.060414 1154458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.371042297s)
	I0229 02:27:19.060472 1154458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.054782535s)
	I0229 02:27:19.064916 1154458 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0229 02:27:19.065258 1154458 addons.go:470] Verifying addon registry=true in "addons-847636"
	I0229 02:27:19.067182 1154458 out.go:177] * Verifying registry addon...
	I0229 02:27:19.065650 1154458 addons.go:470] Verifying addon metrics-server=true in "addons-847636"
	W0229 02:27:19.065673 1154458 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0229 02:27:19.075064 1154458 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-847636 service yakd-dashboard -n yakd-dashboard
	
	I0229 02:27:19.070801 1154458 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0229 02:27:19.070822 1154458 retry.go:31] will retry after 340.703701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0229 02:27:19.075008 1154458 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0229 02:27:19.077731 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:19.086980 1154458 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0229 02:27:19.087005 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0229 02:27:19.087592 1154458 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0229 02:27:19.418667 1154458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 02:27:19.435776 1154458 node_ready.go:49] node "addons-847636" has status "Ready":"True"
	I0229 02:27:19.435808 1154458 node_ready.go:38] duration metric: took 2.517084552s waiting for node "addons-847636" to be "Ready" ...
	I0229 02:27:19.435820 1154458 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:27:19.590527 1154458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.277017731s)
	I0229 02:27:19.590565 1154458 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-847636"
	I0229 02:27:19.594408 1154458 out.go:177] * Verifying csi-hostpath-driver addon...
	I0229 02:27:19.596862 1154458 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0229 02:27:19.676959 1154458 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wkr6n" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:19.700825 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:19.714588 1154458 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0229 02:27:19.714613 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:19.729163 1154458 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0229 02:27:19.729188 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:20.289646 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:20.301120 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:20.302944 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:20.444510 1154458 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0229 02:27:20.444655 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:27:20.472653 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:27:20.653785 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:20.657545 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:20.663620 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:20.723947 1154458 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0229 02:27:20.820826 1154458 addons.go:234] Setting addon gcp-auth=true in "addons-847636"
	I0229 02:27:20.820927 1154458 host.go:66] Checking if "addons-847636" exists ...
	I0229 02:27:20.821435 1154458 cli_runner.go:164] Run: docker container inspect addons-847636 --format={{.State.Status}}
	I0229 02:27:20.847372 1154458 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0229 02:27:20.847428 1154458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-847636
	I0229 02:27:20.879752 1154458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34037 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/addons-847636/id_rsa Username:docker}
	I0229 02:27:21.130109 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:21.133234 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:21.136044 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:21.570565 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:21.583312 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:21.603870 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:21.687541 1154458 pod_ready.go:102] pod "coredns-5dd5756b68-wkr6n" in "kube-system" namespace has status "Ready":"False"
	I0229 02:27:22.085233 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:22.098363 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:22.113262 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:22.230437 1154458 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.383034537s)
	I0229 02:27:22.233874 1154458 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 02:27:22.230740 1154458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.812022722s)
	I0229 02:27:22.236415 1154458 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0229 02:27:22.238763 1154458 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0229 02:27:22.238818 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0229 02:27:22.262195 1154458 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0229 02:27:22.262222 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0229 02:27:22.314889 1154458 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0229 02:27:22.314916 1154458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5447 bytes)
	I0229 02:27:22.375199 1154458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0229 02:27:22.570760 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:22.591896 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:22.602824 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:23.079182 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:23.113215 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:23.114605 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:23.599871 1154458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.224632079s)
	I0229 02:27:23.602623 1154458 addons.go:470] Verifying addon gcp-auth=true in "addons-847636"
	I0229 02:27:23.605830 1154458 out.go:177] * Verifying gcp-auth addon...
	I0229 02:27:23.608499 1154458 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0229 02:27:23.611336 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:23.625798 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:23.644183 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:23.649001 1154458 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0229 02:27:23.649034 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:24.071142 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:24.084283 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:24.103381 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:24.114685 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:24.185036 1154458 pod_ready.go:92] pod "coredns-5dd5756b68-wkr6n" in "kube-system" namespace has status "Ready":"True"
	I0229 02:27:24.185061 1154458 pod_ready.go:81] duration metric: took 4.508067776s waiting for pod "coredns-5dd5756b68-wkr6n" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:24.185082 1154458 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-847636" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:24.196597 1154458 pod_ready.go:92] pod "etcd-addons-847636" in "kube-system" namespace has status "Ready":"True"
	I0229 02:27:24.196624 1154458 pod_ready.go:81] duration metric: took 11.52002ms waiting for pod "etcd-addons-847636" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:24.196640 1154458 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-847636" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:24.205101 1154458 pod_ready.go:92] pod "kube-apiserver-addons-847636" in "kube-system" namespace has status "Ready":"True"
	I0229 02:27:24.205126 1154458 pod_ready.go:81] duration metric: took 8.479124ms waiting for pod "kube-apiserver-addons-847636" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:24.205138 1154458 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-847636" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:24.213664 1154458 pod_ready.go:92] pod "kube-controller-manager-addons-847636" in "kube-system" namespace has status "Ready":"True"
	I0229 02:27:24.213694 1154458 pod_ready.go:81] duration metric: took 8.547924ms waiting for pod "kube-controller-manager-addons-847636" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:24.213708 1154458 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9lb2m" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:24.222178 1154458 pod_ready.go:92] pod "kube-proxy-9lb2m" in "kube-system" namespace has status "Ready":"True"
	I0229 02:27:24.222203 1154458 pod_ready.go:81] duration metric: took 8.488101ms waiting for pod "kube-proxy-9lb2m" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:24.222215 1154458 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-847636" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:24.569416 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:24.583724 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:24.584156 1154458 pod_ready.go:92] pod "kube-scheduler-addons-847636" in "kube-system" namespace has status "Ready":"True"
	I0229 02:27:24.584172 1154458 pod_ready.go:81] duration metric: took 361.94958ms waiting for pod "kube-scheduler-addons-847636" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:24.584191 1154458 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-xbb8t" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:24.603759 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:24.612008 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:25.071544 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:25.084795 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:25.123944 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:25.132253 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:25.571706 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:25.582080 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:25.603660 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:25.615242 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:26.070181 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:26.082994 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:26.102881 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:26.111977 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:26.574003 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:26.582721 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:26.609549 1154458 pod_ready.go:102] pod "metrics-server-69cf46c98-xbb8t" in "kube-system" namespace has status "Ready":"False"
	I0229 02:27:26.616253 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:26.619279 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:27.070287 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:27.084346 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:27.116752 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:27.119080 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:27.570283 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:27.583297 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:27.610329 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:27.624557 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:28.069890 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:28.083175 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:28.102850 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:28.112545 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:28.570137 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:28.591532 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:28.620889 1154458 pod_ready.go:102] pod "metrics-server-69cf46c98-xbb8t" in "kube-system" namespace has status "Ready":"False"
	I0229 02:27:28.625666 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:28.628635 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:29.070641 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:29.083078 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:29.116742 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:29.119504 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:29.570120 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:29.603836 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:29.628403 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:29.641244 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:30.070746 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:30.104990 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:30.123716 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:30.127642 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:30.571560 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:30.589395 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:30.607895 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:30.615455 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:31.076759 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:31.083672 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:31.107164 1154458 pod_ready.go:92] pod "metrics-server-69cf46c98-xbb8t" in "kube-system" namespace has status "Ready":"True"
	I0229 02:27:31.107192 1154458 pod_ready.go:81] duration metric: took 6.522987291s waiting for pod "metrics-server-69cf46c98-xbb8t" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:31.107205 1154458 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-m48gh" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:31.114062 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:31.127642 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:31.572368 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:31.589494 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:31.604817 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:31.617067 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:32.071788 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:32.085778 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:32.105712 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:32.114014 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:32.570593 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:32.583783 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:32.603446 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:32.614614 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:33.069880 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:33.083090 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:33.102705 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:33.113437 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:33.119680 1154458 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m48gh" in "kube-system" namespace has status "Ready":"False"
	I0229 02:27:33.571269 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:33.582584 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:33.602992 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:33.612566 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:34.069775 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:34.082476 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:34.103208 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:34.111618 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:34.570496 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:34.583334 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:34.603121 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:34.614355 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:35.071055 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:35.084885 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:35.108495 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:35.117176 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:35.125916 1154458 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m48gh" in "kube-system" namespace has status "Ready":"False"
	I0229 02:27:35.569540 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:35.583463 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:35.603340 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:35.614491 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:36.071710 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:36.098878 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:36.121836 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:36.142758 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:36.570181 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:36.583254 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:36.609908 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:36.618920 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:37.070987 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:37.083343 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:37.107156 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:37.113661 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:37.570914 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:37.582902 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:37.603158 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:37.612904 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:37.616773 1154458 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m48gh" in "kube-system" namespace has status "Ready":"False"
	I0229 02:27:38.076900 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:38.083335 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:38.104160 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:38.113952 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:38.569941 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:38.583344 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:38.603767 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:38.613830 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:39.070936 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:39.084259 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:39.108959 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:39.118384 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:39.569453 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:39.583002 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:39.603837 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:39.612161 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:40.078474 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:40.084387 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:40.106601 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:40.115501 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:40.119626 1154458 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m48gh" in "kube-system" namespace has status "Ready":"False"
	I0229 02:27:40.570807 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:40.585518 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:40.606322 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:40.631874 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:41.072877 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:41.086082 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:41.107748 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:41.113032 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:41.571177 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:41.587261 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:41.604137 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:41.612474 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:42.074720 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:42.086781 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:42.103507 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:42.135140 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:42.139625 1154458 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m48gh" in "kube-system" namespace has status "Ready":"False"
	I0229 02:27:42.570025 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:42.583115 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:42.612979 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:42.618862 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:43.071093 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:43.083734 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:43.105250 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:43.115229 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:43.569354 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:43.582945 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 02:27:43.602937 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:43.612080 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:44.086610 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:44.088801 1154458 kapi.go:107] duration metric: took 25.018002967s to wait for kubernetes.io/minikube-addons=registry ...
	I0229 02:27:44.105123 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:44.114525 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:44.570629 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:44.604994 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:44.617359 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:44.621199 1154458 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m48gh" in "kube-system" namespace has status "Ready":"False"
	I0229 02:27:45.084332 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:45.147219 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:45.152524 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:45.569624 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:45.602899 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:45.613840 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:46.070568 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:46.103340 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:46.112210 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:46.569739 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:46.603813 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:46.613946 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:47.069746 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:47.103486 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:47.121339 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:47.124136 1154458 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m48gh" in "kube-system" namespace has status "Ready":"False"
	I0229 02:27:47.569356 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:47.603539 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:47.613063 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:48.071838 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:48.104043 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:48.112363 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:48.577104 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:48.613202 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:48.641333 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:49.071890 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:49.104937 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:49.119319 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:49.570156 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:49.602447 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:49.612713 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:49.614508 1154458 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m48gh" in "kube-system" namespace has status "Ready":"False"
	I0229 02:27:50.072647 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:50.110655 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:50.116058 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:50.570502 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:50.603251 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:50.612273 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:51.072675 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:51.103655 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:51.112633 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:51.569879 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:51.609701 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:51.616541 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:51.623167 1154458 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m48gh" in "kube-system" namespace has status "Ready":"False"
	I0229 02:27:52.071008 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:52.103610 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:52.117228 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:52.570999 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:52.603560 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:52.614314 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:53.070563 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:53.104549 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:53.114828 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:53.569359 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:53.602928 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:53.613091 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:54.070752 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:54.104185 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:54.113107 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:54.117111 1154458 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-m48gh" in "kube-system" namespace has status "Ready":"False"
	I0229 02:27:54.570764 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:54.617021 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:54.617473 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:55.070606 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:55.103574 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:55.112218 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:55.569547 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:55.603051 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:55.614966 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:55.617222 1154458 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-m48gh" in "kube-system" namespace has status "Ready":"True"
	I0229 02:27:55.617287 1154458 pod_ready.go:81] duration metric: took 24.510073492s waiting for pod "nvidia-device-plugin-daemonset-m48gh" in "kube-system" namespace to be "Ready" ...
	I0229 02:27:55.617323 1154458 pod_ready.go:38] duration metric: took 36.181490744s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:27:55.617362 1154458 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:27:55.617455 1154458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:27:55.654233 1154458 api_server.go:72] duration metric: took 42.84075998s to wait for apiserver process to appear ...
	I0229 02:27:55.654254 1154458 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:27:55.654274 1154458 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0229 02:27:55.668865 1154458 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0229 02:27:55.670400 1154458 api_server.go:141] control plane version: v1.28.4
	I0229 02:27:55.670464 1154458 api_server.go:131] duration metric: took 16.202734ms to wait for apiserver health ...
	I0229 02:27:55.670488 1154458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:27:55.681379 1154458 system_pods.go:59] 18 kube-system pods found
	I0229 02:27:55.681451 1154458 system_pods.go:61] "coredns-5dd5756b68-wkr6n" [05c2d14f-60a4-4c46-99b6-ba4ed23bdabe] Running
	I0229 02:27:55.681473 1154458 system_pods.go:61] "csi-hostpath-attacher-0" [ccf7758f-9bfa-4d2d-a3cd-1d3483d990b1] Running
	I0229 02:27:55.681497 1154458 system_pods.go:61] "csi-hostpath-resizer-0" [60a21f80-6446-400e-a977-fe12ca471b9b] Running
	I0229 02:27:55.681527 1154458 system_pods.go:61] "csi-hostpathplugin-tkfw7" [cb91acc5-2421-4cd6-93ec-786b5fad8ad9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0229 02:27:55.681558 1154458 system_pods.go:61] "etcd-addons-847636" [4e951d65-09e8-4e3d-83e4-ae66796fc5e7] Running
	I0229 02:27:55.681581 1154458 system_pods.go:61] "kindnet-gvcb9" [854f2d0d-34e0-4aaf-b0bc-2a83aeb547b9] Running
	I0229 02:27:55.681606 1154458 system_pods.go:61] "kube-apiserver-addons-847636" [c9260b87-6a48-493b-9813-d091789e126d] Running
	I0229 02:27:55.681634 1154458 system_pods.go:61] "kube-controller-manager-addons-847636" [3f3375fb-ce90-433d-85ca-af9a50fbc100] Running
	I0229 02:27:55.681659 1154458 system_pods.go:61] "kube-ingress-dns-minikube" [7ae16640-9ae9-419c-80c8-be8130fa4f2a] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0229 02:27:55.681681 1154458 system_pods.go:61] "kube-proxy-9lb2m" [d00e4d21-2064-4d8a-b279-479d134db246] Running
	I0229 02:27:55.681712 1154458 system_pods.go:61] "kube-scheduler-addons-847636" [392df0f4-a41d-468d-a354-cd8072dc48f9] Running
	I0229 02:27:55.681733 1154458 system_pods.go:61] "metrics-server-69cf46c98-xbb8t" [402b0b1e-9934-4ad8-b735-7b56a9bdab20] Running
	I0229 02:27:55.681756 1154458 system_pods.go:61] "nvidia-device-plugin-daemonset-m48gh" [31959e7d-f3ce-4e2a-a4ce-26c05ed98a5f] Running
	I0229 02:27:55.681779 1154458 system_pods.go:61] "registry-proxy-28lz9" [5a37001f-bc0a-46ed-8c02-1be6d3b01226] Running
	I0229 02:27:55.681807 1154458 system_pods.go:61] "registry-swh5m" [7f7cef7c-c308-440d-9af0-a62ea6ff5afc] Running
	I0229 02:27:55.681833 1154458 system_pods.go:61] "snapshot-controller-58dbcc7b99-lvpbh" [823e929a-42f5-4065-a198-9c205629994f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 02:27:55.681856 1154458 system_pods.go:61] "snapshot-controller-58dbcc7b99-tpwb4" [580ef736-ed67-40ce-9cc7-9edc3095e703] Running
	I0229 02:27:55.681880 1154458 system_pods.go:61] "storage-provisioner" [2a87c790-bf88-4cbd-a270-8ec38343dbc9] Running
	I0229 02:27:55.681910 1154458 system_pods.go:74] duration metric: took 11.401654ms to wait for pod list to return data ...
	I0229 02:27:55.681935 1154458 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:27:55.690476 1154458 default_sa.go:45] found service account: "default"
	I0229 02:27:55.690548 1154458 default_sa.go:55] duration metric: took 8.590664ms for default service account to be created ...
	I0229 02:27:55.690573 1154458 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:27:55.701826 1154458 system_pods.go:86] 18 kube-system pods found
	I0229 02:27:55.701898 1154458 system_pods.go:89] "coredns-5dd5756b68-wkr6n" [05c2d14f-60a4-4c46-99b6-ba4ed23bdabe] Running
	I0229 02:27:55.701920 1154458 system_pods.go:89] "csi-hostpath-attacher-0" [ccf7758f-9bfa-4d2d-a3cd-1d3483d990b1] Running
	I0229 02:27:55.701949 1154458 system_pods.go:89] "csi-hostpath-resizer-0" [60a21f80-6446-400e-a977-fe12ca471b9b] Running
	I0229 02:27:55.701977 1154458 system_pods.go:89] "csi-hostpathplugin-tkfw7" [cb91acc5-2421-4cd6-93ec-786b5fad8ad9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0229 02:27:55.702001 1154458 system_pods.go:89] "etcd-addons-847636" [4e951d65-09e8-4e3d-83e4-ae66796fc5e7] Running
	I0229 02:27:55.702032 1154458 system_pods.go:89] "kindnet-gvcb9" [854f2d0d-34e0-4aaf-b0bc-2a83aeb547b9] Running
	I0229 02:27:55.702056 1154458 system_pods.go:89] "kube-apiserver-addons-847636" [c9260b87-6a48-493b-9813-d091789e126d] Running
	I0229 02:27:55.702080 1154458 system_pods.go:89] "kube-controller-manager-addons-847636" [3f3375fb-ce90-433d-85ca-af9a50fbc100] Running
	I0229 02:27:55.702109 1154458 system_pods.go:89] "kube-ingress-dns-minikube" [7ae16640-9ae9-419c-80c8-be8130fa4f2a] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0229 02:27:55.702134 1154458 system_pods.go:89] "kube-proxy-9lb2m" [d00e4d21-2064-4d8a-b279-479d134db246] Running
	I0229 02:27:55.702162 1154458 system_pods.go:89] "kube-scheduler-addons-847636" [392df0f4-a41d-468d-a354-cd8072dc48f9] Running
	I0229 02:27:55.702187 1154458 system_pods.go:89] "metrics-server-69cf46c98-xbb8t" [402b0b1e-9934-4ad8-b735-7b56a9bdab20] Running
	I0229 02:27:55.702209 1154458 system_pods.go:89] "nvidia-device-plugin-daemonset-m48gh" [31959e7d-f3ce-4e2a-a4ce-26c05ed98a5f] Running
	I0229 02:27:55.702232 1154458 system_pods.go:89] "registry-proxy-28lz9" [5a37001f-bc0a-46ed-8c02-1be6d3b01226] Running
	I0229 02:27:55.702256 1154458 system_pods.go:89] "registry-swh5m" [7f7cef7c-c308-440d-9af0-a62ea6ff5afc] Running
	I0229 02:27:55.702412 1154458 system_pods.go:89] "snapshot-controller-58dbcc7b99-lvpbh" [823e929a-42f5-4065-a198-9c205629994f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 02:27:55.702448 1154458 system_pods.go:89] "snapshot-controller-58dbcc7b99-tpwb4" [580ef736-ed67-40ce-9cc7-9edc3095e703] Running
	I0229 02:27:55.702470 1154458 system_pods.go:89] "storage-provisioner" [2a87c790-bf88-4cbd-a270-8ec38343dbc9] Running
	I0229 02:27:55.702495 1154458 system_pods.go:126] duration metric: took 11.902007ms to wait for k8s-apps to be running ...
	I0229 02:27:55.702524 1154458 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:27:55.702601 1154458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:27:55.739701 1154458 system_svc.go:56] duration metric: took 37.16892ms WaitForService to wait for kubelet.
	I0229 02:27:55.739770 1154458 kubeadm.go:581] duration metric: took 42.926298828s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:27:55.739807 1154458 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:27:55.749926 1154458 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0229 02:27:55.749999 1154458 node_conditions.go:123] node cpu capacity is 2
	I0229 02:27:55.750029 1154458 node_conditions.go:105] duration metric: took 10.197454ms to run NodePressure ...
	I0229 02:27:55.750059 1154458 start.go:228] waiting for startup goroutines ...
	I0229 02:27:56.070917 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:56.102884 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:56.112451 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:56.577308 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:56.616189 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:56.624822 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:57.069322 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:57.103374 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:57.113305 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:57.573388 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:57.604241 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:57.613227 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:58.073193 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:58.104101 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:58.113027 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:58.570310 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:58.604506 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:58.612367 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:59.071493 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:59.103365 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:59.112524 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:27:59.569896 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:27:59.603626 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:27:59.613173 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:00.109755 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:00.149132 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:00.180182 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:00.569935 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:00.603020 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:00.613077 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:01.070159 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:01.102650 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:01.112612 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:01.570387 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:01.603582 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:01.612151 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:02.071125 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:02.104876 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:02.114000 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:02.572464 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:02.603009 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:02.617100 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:03.071459 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:03.105017 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:03.113397 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:03.570229 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:03.603511 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:03.612253 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:04.070807 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:04.102668 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:04.112439 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:04.570236 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:04.612312 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:04.616641 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:05.070276 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:05.103331 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:05.112875 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:05.570104 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:05.602879 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:05.618050 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:06.070494 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:06.103530 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:06.114298 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:06.570680 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:06.604390 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:06.612811 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:07.070134 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:07.103126 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:07.115836 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:07.573195 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:07.603223 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:07.628749 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:08.071143 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:08.105906 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:08.113100 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:08.569787 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:08.602182 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:08.613001 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:09.070454 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:09.103082 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:09.112374 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:09.572549 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:09.603871 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:09.612319 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:10.070406 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:10.104625 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:10.112561 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:10.586566 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:10.628934 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:10.629318 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:11.070258 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:11.115124 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:11.119117 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:11.569469 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:11.603030 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:11.612594 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:12.070431 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:12.103301 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:12.112967 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:12.569654 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:12.604017 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:12.613578 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:13.070588 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:13.105144 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:13.113488 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:13.569456 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:13.602935 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:13.612552 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:14.070516 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:14.103104 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:14.112470 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:14.574165 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:14.605255 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:14.612615 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:15.069758 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:15.105159 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:15.113594 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:15.571258 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:15.602818 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:15.612176 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:16.070425 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:16.104039 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:16.115032 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:16.569472 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:16.603467 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:16.613723 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:17.070078 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:17.102507 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:17.112118 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:17.569966 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:17.602618 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:17.613915 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:18.075620 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:18.104976 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:18.113604 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:18.569915 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:18.603293 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:18.613363 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:19.070343 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:19.103138 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 02:28:19.112758 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:19.569683 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:19.603296 1154458 kapi.go:107] duration metric: took 1m0.006431034s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0229 02:28:19.612745 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:20.070325 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:20.113424 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:20.569427 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:20.612030 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:21.070539 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:21.112714 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:21.569765 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:21.612403 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:22.069782 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:22.112662 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:22.570094 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:22.613021 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:23.070458 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:23.111897 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:23.569347 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:23.612849 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:24.071195 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:24.112770 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:24.571488 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:24.612153 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:25.070465 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:25.112196 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:25.572267 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:25.613051 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:26.072240 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:26.113317 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:26.569364 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:26.613721 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:27.069541 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:27.112458 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:27.570653 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:27.613785 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:28.070471 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:28.112267 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:28.570306 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:28.613258 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:29.081151 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:29.115591 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:29.570396 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:29.613348 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:30.073202 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:30.115537 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:30.570302 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:30.613302 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:31.070907 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:31.112937 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:31.570145 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:31.612765 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:32.075781 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:32.114205 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:32.570072 1154458 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 02:28:32.619962 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:33.070200 1154458 kapi.go:107] duration metric: took 1m14.005290682s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0229 02:28:33.113001 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:33.628834 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:34.114136 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:34.614784 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:35.113182 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:35.612196 1154458 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 02:28:36.112722 1154458 kapi.go:107] duration metric: took 1m12.504222059s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0229 02:28:36.114433 1154458 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-847636 cluster.
	I0229 02:28:36.116207 1154458 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0229 02:28:36.117845 1154458 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0229 02:28:36.119759 1154458 out.go:177] * Enabled addons: cloud-spanner, inspektor-gadget, nvidia-device-plugin, ingress-dns, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0229 02:28:36.121277 1154458 addons.go:505] enable addons completed in 1m24.245418273s: enabled=[cloud-spanner inspektor-gadget nvidia-device-plugin ingress-dns storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0229 02:28:36.121329 1154458 start.go:233] waiting for cluster config update ...
	I0229 02:28:36.121348 1154458 start.go:242] writing updated cluster config ...
	I0229 02:28:36.121683 1154458 ssh_runner.go:195] Run: rm -f paused
	I0229 02:28:36.449318 1154458 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:28:36.451881 1154458 out.go:177] * Done! kubectl is now configured to use "addons-847636" cluster and "default" namespace by default
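	
	The gcp-auth messages above name two knobs for this addon: a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key, and pods that existed before the addon came up only get credentials after being recreated or after the addon is re-enabled with --refresh. A minimal sketch of both against this run's profile; the label value "true", the pod name, and the hello-app image are illustrative choices, not taken from the addon output:
	
	  # Hypothetical pod that opts out of GCP credential mounting via the
	  # gcp-auth-skip-secret label mentioned in the addon output above.
	  kubectl --context addons-847636 run no-gcp-creds \
	    --image=gcr.io/google-samples/hello-app:1.0 \
	    --labels=gcp-auth-skip-secret=true
	
	  # Re-run the addon with --refresh so existing pods are mounted with
	  # credentials (the alternative named above is recreating them).
	  out/minikube-linux-arm64 -p addons-847636 addons enable gcp-auth --refresh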
	
	
	==> CRI-O <==
	Feb 29 02:32:25 addons-847636 crio[905]: time="2024-02-29 02:32:25.467306673Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=d60d1935-62a5-4d2f-899b-2fde27241835 name=/runtime.v1.ImageService/ImageStatus
	Feb 29 02:32:25 addons-847636 crio[905]: time="2024-02-29 02:32:25.468343819Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=edd50a7e-0a23-490b-8c1a-39145a881ccb name=/runtime.v1.ImageService/ImageStatus
	Feb 29 02:32:25 addons-847636 crio[905]: time="2024-02-29 02:32:25.468542078Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=edd50a7e-0a23-490b-8c1a-39145a881ccb name=/runtime.v1.ImageService/ImageStatus
	Feb 29 02:32:25 addons-847636 crio[905]: time="2024-02-29 02:32:25.469268259Z" level=info msg="Creating container: default/hello-world-app-5d77478584-xt9m7/hello-world-app" id=c2e1d1ac-ad8d-4fe1-adc1-d6eb233c6ae1 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 29 02:32:25 addons-847636 crio[905]: time="2024-02-29 02:32:25.469360574Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 29 02:32:25 addons-847636 crio[905]: time="2024-02-29 02:32:25.535613049Z" level=info msg="Created container 2274c8d8b9a656b556f21906689cf76550ade6ff07a4e1d8c4715493e8f48166: default/hello-world-app-5d77478584-xt9m7/hello-world-app" id=c2e1d1ac-ad8d-4fe1-adc1-d6eb233c6ae1 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 29 02:32:25 addons-847636 crio[905]: time="2024-02-29 02:32:25.536580328Z" level=info msg="Starting container: 2274c8d8b9a656b556f21906689cf76550ade6ff07a4e1d8c4715493e8f48166" id=688b6eea-2e5c-47de-8b1f-9b0fc5ddaa96 name=/runtime.v1.RuntimeService/StartContainer
	Feb 29 02:32:25 addons-847636 crio[905]: time="2024-02-29 02:32:25.544312980Z" level=info msg="Started container" PID=7822 containerID=2274c8d8b9a656b556f21906689cf76550ade6ff07a4e1d8c4715493e8f48166 description=default/hello-world-app-5d77478584-xt9m7/hello-world-app id=688b6eea-2e5c-47de-8b1f-9b0fc5ddaa96 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b110631ee077547db7f84159ed46f3a1f423fa08035fd53ff9633e79dda44adc
	Feb 29 02:32:25 addons-847636 conmon[7810]: conmon 2274c8d8b9a656b556f2 <ninfo>: container 7822 exited with status 1
	Feb 29 02:32:26 addons-847636 crio[905]: time="2024-02-29 02:32:26.073904390Z" level=warning msg="Stopping container 4471d3b6f8826a44036ce087dd6acf435d41ee11a5f99a1b4e4f6e29f33b058f with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=af152b10-d5df-4ae3-830c-b08142dce9e8 name=/runtime.v1.RuntimeService/StopContainer
	Feb 29 02:32:26 addons-847636 conmon[4841]: conmon 4471d3b6f8826a44036c <ninfo>: container 4853 exited with status 137
	Feb 29 02:32:26 addons-847636 crio[905]: time="2024-02-29 02:32:26.209403581Z" level=info msg="Stopped container 4471d3b6f8826a44036ce087dd6acf435d41ee11a5f99a1b4e4f6e29f33b058f: ingress-nginx/ingress-nginx-controller-7967645744-pqwmg/controller" id=af152b10-d5df-4ae3-830c-b08142dce9e8 name=/runtime.v1.RuntimeService/StopContainer
	Feb 29 02:32:26 addons-847636 crio[905]: time="2024-02-29 02:32:26.209917891Z" level=info msg="Stopping pod sandbox: a96208994db4e4ea43d32dc261187b14747e688ccdda0def6fd878ed3173e4db" id=6144e9f4-1cb2-40dc-9ed8-877c0b17aae0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 29 02:32:26 addons-847636 crio[905]: time="2024-02-29 02:32:26.213173974Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-FHITPCWEWLZHYKIQ - [0:0]\n:KUBE-HP-T2JODANYQQBWAZDU - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-T2JODANYQQBWAZDU\n-X KUBE-HP-FHITPCWEWLZHYKIQ\nCOMMIT\n"
	Feb 29 02:32:26 addons-847636 crio[905]: time="2024-02-29 02:32:26.214522165Z" level=info msg="Closing host port tcp:80"
	Feb 29 02:32:26 addons-847636 crio[905]: time="2024-02-29 02:32:26.214570558Z" level=info msg="Closing host port tcp:443"
	Feb 29 02:32:26 addons-847636 crio[905]: time="2024-02-29 02:32:26.215821183Z" level=info msg="Host port tcp:80 does not have an open socket"
	Feb 29 02:32:26 addons-847636 crio[905]: time="2024-02-29 02:32:26.215846126Z" level=info msg="Host port tcp:443 does not have an open socket"
	Feb 29 02:32:26 addons-847636 crio[905]: time="2024-02-29 02:32:26.216314652Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7967645744-pqwmg Namespace:ingress-nginx ID:a96208994db4e4ea43d32dc261187b14747e688ccdda0def6fd878ed3173e4db UID:1309e9a0-804a-4d1a-82c9-39daf26ceb32 NetNS:/var/run/netns/4142cca8-f2c2-4c32-a4fe-643e7447b2fd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 29 02:32:26 addons-847636 crio[905]: time="2024-02-29 02:32:26.216467447Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7967645744-pqwmg from CNI network \"kindnet\" (type=ptp)"
	Feb 29 02:32:26 addons-847636 crio[905]: time="2024-02-29 02:32:26.233564248Z" level=info msg="Stopped pod sandbox: a96208994db4e4ea43d32dc261187b14747e688ccdda0def6fd878ed3173e4db" id=6144e9f4-1cb2-40dc-9ed8-877c0b17aae0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 29 02:32:26 addons-847636 crio[905]: time="2024-02-29 02:32:26.383827802Z" level=info msg="Removing container: 4471d3b6f8826a44036ce087dd6acf435d41ee11a5f99a1b4e4f6e29f33b058f" id=a7d2e63a-7a47-4d38-9bed-d09c4b5484bd name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 29 02:32:26 addons-847636 crio[905]: time="2024-02-29 02:32:26.413020431Z" level=info msg="Removed container 4471d3b6f8826a44036ce087dd6acf435d41ee11a5f99a1b4e4f6e29f33b058f: ingress-nginx/ingress-nginx-controller-7967645744-pqwmg/controller" id=a7d2e63a-7a47-4d38-9bed-d09c4b5484bd name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 29 02:32:26 addons-847636 crio[905]: time="2024-02-29 02:32:26.414725518Z" level=info msg="Removing container: 1101c930aa07aefb3d4edda8d94e563c29135f6101771609d3a11c41320ff7c2" id=4258cf05-bbd9-4a88-a160-5585041d352e name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 29 02:32:26 addons-847636 crio[905]: time="2024-02-29 02:32:26.434285579Z" level=info msg="Removed container 1101c930aa07aefb3d4edda8d94e563c29135f6101771609d3a11c41320ff7c2: default/hello-world-app-5d77478584-xt9m7/hello-world-app" id=4258cf05-bbd9-4a88-a160-5585041d352e name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2274c8d8b9a65       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             5 seconds ago       Exited              hello-world-app           2                   b110631ee0775       hello-world-app-5d77478584-xt9m7
	1a9e8da3c3763       docker.io/library/nginx@sha256:34aa0a372d3220dc0448131f809c72d8085f79bdec8058ad6970fc034a395674                              2 minutes ago       Running             nginx                     0                   84554d4af8d6f       nginx
	ab23d52563c03       ghcr.io/headlamp-k8s/headlamp@sha256:0fe50c48c186b89ff3d341dba427174d8232a64c3062af5de854a3a7cb2105ce                        2 minutes ago       Running             headlamp                  0                   32fbca0d7ab0c       headlamp-7ddfbb94ff-97qj6
	84bd50ad0c4f0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                 3 minutes ago       Running             gcp-auth                  0                   8a42bf5c9e357       gcp-auth-5f6b4f85fd-krfn5
	db9c5b5306cdf       f8c5dfd0ede5fc09af68292793e1622682ffc7336e487c703a1341e6248f1bdd                                                             4 minutes ago       Exited              patch                     1                   a864f21a238a1       ingress-nginx-admission-patch-fr7md
	cac3c90916c78       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:25d6a5f11211cc5c3f9f2bf552b585374af287b4debf693cacbe2da47daa5084   4 minutes ago       Exited              create                    0                   1ea880c842915       ingress-nginx-admission-create-rdmgq
	57b8a8d201786       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             4 minutes ago       Running             local-path-provisioner    0                   7d0b937e07ff6       local-path-provisioner-78b46b4d5c-x5qj5
	91403266bddcb       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   c4c4802f2845d       yakd-dashboard-9947fc6bf-grgbx
	7eeb871a3830b       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             5 minutes ago       Running             coredns                   0                   1e6d0fca06cc8       coredns-5dd5756b68-wkr6n
	5b6ba67b37067       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner       0                   9f04d44bc8292       storage-provisioner
	93651cd6108ba       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988                           5 minutes ago       Running             kindnet-cni               0                   a1624abeacbc2       kindnet-gvcb9
	b2d04fc89f8cc       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                                             5 minutes ago       Running             kube-proxy                0                   0285a0bc6611f       kube-proxy-9lb2m
	c44467f60f283       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                                             5 minutes ago       Running             kube-scheduler            0                   3000b9c41a2e3       kube-scheduler-addons-847636
	e86b22466daf2       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                                             5 minutes ago       Running             kube-apiserver            0                   e60a759d07349       kube-apiserver-addons-847636
	1fdce7de4e02b       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                                             5 minutes ago       Running             kube-controller-manager   0                   7009f7e6b53e7       kube-controller-manager-addons-847636
	e94f7542acff1       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             5 minutes ago       Running             etcd                      0                   1deb883cbf62d       etcd-addons-847636
	
	
	==> coredns [7eeb871a3830bfa6eaaa97dacc172f52758996ad390db82a813adaba22be970e] <==
	[INFO] 10.244.0.19:51374 - 46553 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000613s
	[INFO] 10.244.0.19:51374 - 55229 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001147552s
	[INFO] 10.244.0.19:37306 - 4058 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002285759s
	[INFO] 10.244.0.19:51374 - 34613 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001846542s
	[INFO] 10.244.0.19:37306 - 51458 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002215918s
	[INFO] 10.244.0.19:37306 - 48517 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000121287s
	[INFO] 10.244.0.19:51374 - 42313 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000194788s
	[INFO] 10.244.0.19:53287 - 62601 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00010203s
	[INFO] 10.244.0.19:58819 - 16243 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000040451s
	[INFO] 10.244.0.19:53287 - 20149 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000073419s
	[INFO] 10.244.0.19:58819 - 28787 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000058682s
	[INFO] 10.244.0.19:53287 - 17887 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059593s
	[INFO] 10.244.0.19:58819 - 39835 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038514s
	[INFO] 10.244.0.19:53287 - 52922 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061685s
	[INFO] 10.244.0.19:58819 - 9607 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039458s
	[INFO] 10.244.0.19:53287 - 43576 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052832s
	[INFO] 10.244.0.19:58819 - 52223 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039294s
	[INFO] 10.244.0.19:53287 - 10912 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060627s
	[INFO] 10.244.0.19:58819 - 46580 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043478s
	[INFO] 10.244.0.19:53287 - 55908 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001203281s
	[INFO] 10.244.0.19:58819 - 24548 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001379149s
	[INFO] 10.244.0.19:53287 - 51713 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001213062s
	[INFO] 10.244.0.19:53287 - 43274 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074715s
	[INFO] 10.244.0.19:58819 - 62778 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00126523s
	[INFO] 10.244.0.19:58819 - 18323 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000213308s
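	
	The NXDOMAIN/NOERROR pairs above are the pod resolver walking its search domains (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the fully-qualified service name finally answers. To reproduce only the final, successful lookup from inside the cluster, a throwaway pod is enough; the pod name and busybox image below are illustrative, not taken from this run:
	
	  # Resolve the FQDN directly, so no search-domain expansion (and none of
	  # the NXDOMAIN round trips logged above) is involved.
	  kubectl --context addons-847636 run dns-check --rm -it --restart=Never \
	    --image=busybox:1.36 -- \
	    nslookup hello-world-app.default.svc.cluster.local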
	
	
	==> describe nodes <==
	Name:               addons-847636
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-847636
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=addons-847636
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T02_27_00_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-847636
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:26:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-847636
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:32:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:30:03 +0000   Thu, 29 Feb 2024 02:26:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:30:03 +0000   Thu, 29 Feb 2024 02:26:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:30:03 +0000   Thu, 29 Feb 2024 02:26:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:30:03 +0000   Thu, 29 Feb 2024 02:27:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-847636
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 489db9ea8dc64bc89aa5e19928737db2
	  System UUID:                26f97ae9-adda-4dcd-be1c-105381fb0d0e
	  Boot ID:                    d15cd6b5-a0a6-45f5-95b2-2521c5763941
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-xt9m7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  gcp-auth                    gcp-auth-5f6b4f85fd-krfn5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  headlamp                    headlamp-7ddfbb94ff-97qj6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 coredns-5dd5756b68-wkr6n                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m18s
	  kube-system                 etcd-addons-847636                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m32s
	  kube-system                 kindnet-gvcb9                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m20s
	  kube-system                 kube-apiserver-addons-847636               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-controller-manager-addons-847636      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-proxy-9lb2m                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-scheduler-addons-847636               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  local-path-storage          local-path-provisioner-78b46b4d5c-x5qj5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-grgbx             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m12s                  kube-proxy       
	  Normal  Starting                 5m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m40s (x8 over 5m40s)  kubelet          Node addons-847636 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m40s (x8 over 5m40s)  kubelet          Node addons-847636 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m40s (x8 over 5m40s)  kubelet          Node addons-847636 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m32s                  kubelet          Node addons-847636 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m32s                  kubelet          Node addons-847636 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m32s                  kubelet          Node addons-847636 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m20s                  node-controller  Node addons-847636 event: Registered Node addons-847636 in Controller
	  Normal  NodeReady                5m12s                  kubelet          Node addons-847636 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001066] FS-Cache: O-key=[8] '3a3e5c0100000000'
	[  +0.000719] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000964] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=000000008300a153
	[  +0.001170] FS-Cache: N-key=[8] '3a3e5c0100000000'
	[  +0.002926] FS-Cache: Duplicate cookie detected
	[  +0.000709] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001140] FS-Cache: O-cookie d=000000007d8e8356{9p.inode} n=00000000aff28ed8
	[  +0.001147] FS-Cache: O-key=[8] '3a3e5c0100000000'
	[  +0.000713] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000968] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=0000000058168a0d
	[  +0.001153] FS-Cache: N-key=[8] '3a3e5c0100000000'
	[  +2.636492] FS-Cache: Duplicate cookie detected
	[  +0.000790] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001146] FS-Cache: O-cookie d=000000007d8e8356{9p.inode} n=00000000d8c7b9c4
	[  +0.001157] FS-Cache: O-key=[8] '393e5c0100000000'
	[  +0.000747] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=000000008300a153
	[  +0.001083] FS-Cache: N-key=[8] '393e5c0100000000'
	[  +0.299075] FS-Cache: Duplicate cookie detected
	[  +0.000712] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001037] FS-Cache: O-cookie d=000000007d8e8356{9p.inode} n=00000000aa687443
	[  +0.001052] FS-Cache: O-key=[8] '3f3e5c0100000000'
	[  +0.000728] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=00000000f8ac0df1
	[  +0.001102] FS-Cache: N-key=[8] '3f3e5c0100000000'
	
	
	==> etcd [e94f7542acff19ec0b6444969c36cd8935eaa7b5b1221438574f919c2b2dff80] <==
	{"level":"info","ts":"2024-02-29T02:26:52.447283Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T02:26:52.447585Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T02:26:52.450048Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T02:26:52.450232Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T02:26:52.694101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-29T02:26:52.694154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-29T02:26:52.69417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-02-29T02:26:52.694192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-02-29T02:26:52.694199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-29T02:26:52.694209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-02-29T02:26:52.694217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-29T02:26:52.695257Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:26:52.700156Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-847636 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:26:52.700196Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:26:52.701305Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:26:52.702527Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:26:52.702619Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:26:52.702654Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:26:52.702665Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:26:52.703702Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-29T02:26:52.743058Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:26:52.743168Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:27:13.897682Z","caller":"traceutil/trace.go:171","msg":"trace[1766325203] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"127.441672ms","start":"2024-02-29T02:27:13.770224Z","end":"2024-02-29T02:27:13.897665Z","steps":["trace[1766325203] 'process raft request'  (duration: 78.23075ms)","trace[1766325203] 'compare'  (duration: 49.113692ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-29T02:27:14.932422Z","caller":"traceutil/trace.go:171","msg":"trace[1040333960] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"117.326532ms","start":"2024-02-29T02:27:14.815078Z","end":"2024-02-29T02:27:14.932405Z","steps":["trace[1040333960] 'process raft request'  (duration: 65.273383ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T02:27:17.768011Z","caller":"traceutil/trace.go:171","msg":"trace[2134492581] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"108.658461ms","start":"2024-02-29T02:27:17.6593Z","end":"2024-02-29T02:27:17.767958Z","steps":["trace[2134492581] 'process raft request'  (duration: 75.037215ms)"],"step_count":1}
	
	
	==> gcp-auth [84bd50ad0c4f07715cdf41ae72931e6fcdeb5ba1740c66105092ca1514e00766] <==
	2024/02/29 02:28:34 GCP Auth Webhook started!
	2024/02/29 02:28:46 Ready to marshal response ...
	2024/02/29 02:28:46 Ready to write response ...
	2024/02/29 02:28:54 Ready to marshal response ...
	2024/02/29 02:28:54 Ready to write response ...
	2024/02/29 02:29:22 Ready to marshal response ...
	2024/02/29 02:29:22 Ready to write response ...
	2024/02/29 02:29:22 Ready to marshal response ...
	2024/02/29 02:29:22 Ready to write response ...
	2024/02/29 02:29:26 Ready to marshal response ...
	2024/02/29 02:29:26 Ready to write response ...
	2024/02/29 02:29:29 Ready to marshal response ...
	2024/02/29 02:29:29 Ready to write response ...
	2024/02/29 02:29:38 Ready to marshal response ...
	2024/02/29 02:29:38 Ready to write response ...
	2024/02/29 02:29:38 Ready to marshal response ...
	2024/02/29 02:29:38 Ready to write response ...
	2024/02/29 02:29:38 Ready to marshal response ...
	2024/02/29 02:29:38 Ready to write response ...
	2024/02/29 02:29:45 Ready to marshal response ...
	2024/02/29 02:29:45 Ready to write response ...
	2024/02/29 02:32:05 Ready to marshal response ...
	2024/02/29 02:32:05 Ready to write response ...
	
	
	==> kernel <==
	 02:32:31 up  6:14,  0 users,  load average: 0.29, 1.41, 2.15
	Linux addons-847636 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [93651cd6108ba68f41591180c5d1ac96f653546c952b2fdbde99d7105191c4fc] <==
	I0229 02:30:29.179805       1 main.go:227] handling current node
	I0229 02:30:39.184147       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:30:39.184171       1 main.go:227] handling current node
	I0229 02:30:49.193359       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:30:49.193384       1 main.go:227] handling current node
	I0229 02:30:59.220252       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:30:59.220283       1 main.go:227] handling current node
	I0229 02:31:09.227065       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:31:09.227090       1 main.go:227] handling current node
	I0229 02:31:19.231505       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:31:19.231725       1 main.go:227] handling current node
	I0229 02:31:29.244958       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:31:29.244991       1 main.go:227] handling current node
	I0229 02:31:39.248951       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:31:39.248978       1 main.go:227] handling current node
	I0229 02:31:49.259247       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:31:49.259361       1 main.go:227] handling current node
	I0229 02:31:59.263863       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:31:59.263891       1 main.go:227] handling current node
	I0229 02:32:09.276036       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:32:09.276064       1 main.go:227] handling current node
	I0229 02:32:19.280141       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:32:19.280170       1 main.go:227] handling current node
	I0229 02:32:29.291533       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:32:29.291561       1 main.go:227] handling current node
	
	
	==> kube-apiserver [e86b22466daf2e243b223dc2d80ef5eda2b627e92167fe55fddf32f3ca46130f] <==
	I0229 02:29:32.101810       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0229 02:29:38.071901       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.155.195"}
	I0229 02:29:44.198337       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 02:29:44.203832       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 02:29:44.213088       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 02:29:44.213164       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 02:29:44.227477       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 02:29:44.227530       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 02:29:44.240373       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 02:29:44.240500       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 02:29:44.249166       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 02:29:44.249222       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 02:29:44.260095       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 02:29:44.260228       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 02:29:44.275559       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 02:29:44.275621       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 02:29:44.285966       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 02:29:44.286380       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 02:29:44.840143       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	W0229 02:29:45.242029       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I0229 02:29:45.257591       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.194.48"}
	W0229 02:29:45.287404       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0229 02:29:45.300746       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0229 02:32:06.216349       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.12.223"}
	E0229 02:32:22.396625       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x4007eba2a0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x4006a046e0), ResponseWriter:(*httpsnoop.rw)(0x4006a046e0), Flusher:(*httpsnoop.rw)(0x4006a046e0), CloseNotifier:(*httpsnoop.rw)(0x4006a046e0), Pusher:(*httpsnoop.rw)(0x4006a046e0)}}, encoder:(*versioning.codec)(0x400d1228c0), memAllocator:(*runtime.Allocator)(0x4009a726d8)})
	
	
	==> kube-controller-manager [1fdce7de4e02b6dd028100bdddf4b54f9906fa940e3747503754a093d378bc24] <==
	W0229 02:31:11.978825       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 02:31:11.978866       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 02:31:24.125075       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 02:31:24.125112       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 02:31:49.311753       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 02:31:49.311787       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 02:31:50.091290       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 02:31:50.091324       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 02:31:54.874457       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 02:31:54.874493       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 02:31:58.853003       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 02:31:58.853036       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0229 02:32:05.920095       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0229 02:32:05.963399       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-xt9m7"
	I0229 02:32:05.973409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="53.110557ms"
	I0229 02:32:05.994027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.448694ms"
	I0229 02:32:05.994125       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="51.774µs"
	I0229 02:32:06.014331       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="82.173µs"
	I0229 02:32:10.361791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.539µs"
	I0229 02:32:11.366786       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="86.539µs"
	I0229 02:32:12.365406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="218.214µs"
	I0229 02:32:23.029318       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0229 02:32:23.035309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7967645744" duration="4.735µs"
	I0229 02:32:23.038157       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0229 02:32:26.403004       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="41.107µs"
	
	
	==> kube-proxy [b2d04fc89f8cc11cfe9f690a50a2f23cd8ef7fcb4d14f3f67092a6feeed9b8c3] <==
	I0229 02:27:17.685563       1 server_others.go:69] "Using iptables proxy"
	I0229 02:27:18.247478       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0229 02:27:18.588357       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0229 02:27:18.591830       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:27:18.591870       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0229 02:27:18.591877       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0229 02:27:18.591905       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:27:18.592307       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:27:18.592321       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:27:18.593416       1 config.go:188] "Starting service config controller"
	I0229 02:27:18.593444       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:27:18.593468       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:27:18.593473       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:27:18.622225       1 config.go:315] "Starting node config controller"
	I0229 02:27:18.622351       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:27:18.693920       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:27:18.703189       1 shared_informer.go:318] Caches are synced for service config
	I0229 02:27:18.722712       1 shared_informer.go:318] Caches are synced for node config
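	
	The kube-proxy startup lines above note that route_localnet=1 is set so NodePorts answer on localhost, and they name the two ways to change that: disabling iptables.localhostNodePorts or setting nodePortAddresses. In a kubeadm-provisioned cluster like this one those fields live in the KubeProxyConfiguration carried by the kube-proxy ConfigMap; the sketch below only points at where they would go, and the CIDR is illustrative (derived from this node's 192.168.49.x network), not a recommendation:
	
	  # kubeadm keeps the KubeProxyConfiguration under the config.conf key of
	  # the kube-proxy ConfigMap in kube-system.
	  kubectl --context addons-847636 -n kube-system edit configmap kube-proxy
	
	  # Inside config.conf, the two options named in the log would look like:
	  #
	  #   iptables:
	  #     localhostNodePorts: false   # stop answering NodePorts on 127.0.0.1
	  #   nodePortAddresses:
	  #     - 192.168.49.0/24           # only serve NodePorts on this range
	  #
	  # The kube-proxy pods need to be restarted for the change to take effect.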
	
	
	==> kube-scheduler [c44467f60f2830718966887810c6d446ee91949ac2f6661f417a8c4dec232b22] <==
	W0229 02:26:56.623252       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 02:26:56.624598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 02:26:56.623305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 02:26:56.624627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 02:26:56.623357       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 02:26:56.624655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 02:26:56.623414       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 02:26:56.624678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 02:26:56.623421       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 02:26:56.624694       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 02:26:56.623466       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 02:26:56.624707       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 02:26:56.623470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 02:26:56.624724       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 02:26:57.547536       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 02:26:57.547676       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 02:26:57.647855       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 02:26:57.648066       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 02:26:57.648023       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 02:26:57.648157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 02:26:57.676884       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 02:26:57.676987       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 02:26:57.706985       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 02:26:57.707099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0229 02:26:58.201918       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 02:32:22 addons-847636 kubelet[1363]: E0229 02:32:22.038614    1363 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3a6ef8412f61be0eddd62e41dda5bdabf662cfff3615c536eedb48c6a1798e96/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3a6ef8412f61be0eddd62e41dda5bdabf662cfff3615c536eedb48c6a1798e96/diff: no such file or directory, extraDiskErr: <nil>
	Feb 29 02:32:22 addons-847636 kubelet[1363]: E0229 02:32:22.039116    1363 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/56ba150b9cf721b206e2955e0377c9ba3bf62900440d87e82fcd85c67ad3b2d7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/56ba150b9cf721b206e2955e0377c9ba3bf62900440d87e82fcd85c67ad3b2d7/diff: no such file or directory, extraDiskErr: <nil>
	Feb 29 02:32:22 addons-847636 kubelet[1363]: I0229 02:32:22.197412    1363 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq2h9\" (UniqueName: \"kubernetes.io/projected/7ae16640-9ae9-419c-80c8-be8130fa4f2a-kube-api-access-sq2h9\") pod \"7ae16640-9ae9-419c-80c8-be8130fa4f2a\" (UID: \"7ae16640-9ae9-419c-80c8-be8130fa4f2a\") "
	Feb 29 02:32:22 addons-847636 kubelet[1363]: I0229 02:32:22.199377    1363 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae16640-9ae9-419c-80c8-be8130fa4f2a-kube-api-access-sq2h9" (OuterVolumeSpecName: "kube-api-access-sq2h9") pod "7ae16640-9ae9-419c-80c8-be8130fa4f2a" (UID: "7ae16640-9ae9-419c-80c8-be8130fa4f2a"). InnerVolumeSpecName "kube-api-access-sq2h9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 29 02:32:22 addons-847636 kubelet[1363]: I0229 02:32:22.298404    1363 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sq2h9\" (UniqueName: \"kubernetes.io/projected/7ae16640-9ae9-419c-80c8-be8130fa4f2a-kube-api-access-sq2h9\") on node \"addons-847636\" DevicePath \"\""
	Feb 29 02:32:22 addons-847636 kubelet[1363]: I0229 02:32:22.370850    1363 scope.go:117] "RemoveContainer" containerID="a6ae6ebea89ef0b4d0ab6ba5afd72ad1007f9aa913b8db19538451ac52af596a"
	Feb 29 02:32:23 addons-847636 kubelet[1363]: I0229 02:32:23.467232    1363 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7ae16640-9ae9-419c-80c8-be8130fa4f2a" path="/var/lib/kubelet/pods/7ae16640-9ae9-419c-80c8-be8130fa4f2a/volumes"
	Feb 29 02:32:23 addons-847636 kubelet[1363]: I0229 02:32:23.468306    1363 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7b0e77fd-e59c-4a15-ba06-fb5f0eeb2fb1" path="/var/lib/kubelet/pods/7b0e77fd-e59c-4a15-ba06-fb5f0eeb2fb1/volumes"
	Feb 29 02:32:23 addons-847636 kubelet[1363]: I0229 02:32:23.468696    1363 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fcabc5a9-f6ff-4c7b-a981-e49b9307e27f" path="/var/lib/kubelet/pods/fcabc5a9-f6ff-4c7b-a981-e49b9307e27f/volumes"
	Feb 29 02:32:25 addons-847636 kubelet[1363]: I0229 02:32:25.466539    1363 scope.go:117] "RemoveContainer" containerID="1101c930aa07aefb3d4edda8d94e563c29135f6101771609d3a11c41320ff7c2"
	Feb 29 02:32:26 addons-847636 kubelet[1363]: I0229 02:32:26.380912    1363 scope.go:117] "RemoveContainer" containerID="4471d3b6f8826a44036ce087dd6acf435d41ee11a5f99a1b4e4f6e29f33b058f"
	Feb 29 02:32:26 addons-847636 kubelet[1363]: I0229 02:32:26.386561    1363 scope.go:117] "RemoveContainer" containerID="2274c8d8b9a656b556f21906689cf76550ade6ff07a4e1d8c4715493e8f48166"
	Feb 29 02:32:26 addons-847636 kubelet[1363]: E0229 02:32:26.387230    1363 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-xt9m7_default(4f0b5597-3005-44e5-9864-12fb8e257ed5)\"" pod="default/hello-world-app-5d77478584-xt9m7" podUID="4f0b5597-3005-44e5-9864-12fb8e257ed5"
	Feb 29 02:32:26 addons-847636 kubelet[1363]: I0229 02:32:26.413276    1363 scope.go:117] "RemoveContainer" containerID="4471d3b6f8826a44036ce087dd6acf435d41ee11a5f99a1b4e4f6e29f33b058f"
	Feb 29 02:32:26 addons-847636 kubelet[1363]: E0229 02:32:26.413663    1363 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4471d3b6f8826a44036ce087dd6acf435d41ee11a5f99a1b4e4f6e29f33b058f\": container with ID starting with 4471d3b6f8826a44036ce087dd6acf435d41ee11a5f99a1b4e4f6e29f33b058f not found: ID does not exist" containerID="4471d3b6f8826a44036ce087dd6acf435d41ee11a5f99a1b4e4f6e29f33b058f"
	Feb 29 02:32:26 addons-847636 kubelet[1363]: I0229 02:32:26.413711    1363 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4471d3b6f8826a44036ce087dd6acf435d41ee11a5f99a1b4e4f6e29f33b058f"} err="failed to get container status \"4471d3b6f8826a44036ce087dd6acf435d41ee11a5f99a1b4e4f6e29f33b058f\": rpc error: code = NotFound desc = could not find container \"4471d3b6f8826a44036ce087dd6acf435d41ee11a5f99a1b4e4f6e29f33b058f\": container with ID starting with 4471d3b6f8826a44036ce087dd6acf435d41ee11a5f99a1b4e4f6e29f33b058f not found: ID does not exist"
	Feb 29 02:32:26 addons-847636 kubelet[1363]: I0229 02:32:26.413724    1363 scope.go:117] "RemoveContainer" containerID="1101c930aa07aefb3d4edda8d94e563c29135f6101771609d3a11c41320ff7c2"
	Feb 29 02:32:26 addons-847636 kubelet[1363]: I0229 02:32:26.425432    1363 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1309e9a0-804a-4d1a-82c9-39daf26ceb32-webhook-cert\") pod \"1309e9a0-804a-4d1a-82c9-39daf26ceb32\" (UID: \"1309e9a0-804a-4d1a-82c9-39daf26ceb32\") "
	Feb 29 02:32:26 addons-847636 kubelet[1363]: I0229 02:32:26.425485    1363 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p95zk\" (UniqueName: \"kubernetes.io/projected/1309e9a0-804a-4d1a-82c9-39daf26ceb32-kube-api-access-p95zk\") pod \"1309e9a0-804a-4d1a-82c9-39daf26ceb32\" (UID: \"1309e9a0-804a-4d1a-82c9-39daf26ceb32\") "
	Feb 29 02:32:26 addons-847636 kubelet[1363]: I0229 02:32:26.428028    1363 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1309e9a0-804a-4d1a-82c9-39daf26ceb32-kube-api-access-p95zk" (OuterVolumeSpecName: "kube-api-access-p95zk") pod "1309e9a0-804a-4d1a-82c9-39daf26ceb32" (UID: "1309e9a0-804a-4d1a-82c9-39daf26ceb32"). InnerVolumeSpecName "kube-api-access-p95zk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 29 02:32:26 addons-847636 kubelet[1363]: I0229 02:32:26.430099    1363 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1309e9a0-804a-4d1a-82c9-39daf26ceb32-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1309e9a0-804a-4d1a-82c9-39daf26ceb32" (UID: "1309e9a0-804a-4d1a-82c9-39daf26ceb32"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 29 02:32:26 addons-847636 kubelet[1363]: I0229 02:32:26.526326    1363 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1309e9a0-804a-4d1a-82c9-39daf26ceb32-webhook-cert\") on node \"addons-847636\" DevicePath \"\""
	Feb 29 02:32:26 addons-847636 kubelet[1363]: I0229 02:32:26.526372    1363 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p95zk\" (UniqueName: \"kubernetes.io/projected/1309e9a0-804a-4d1a-82c9-39daf26ceb32-kube-api-access-p95zk\") on node \"addons-847636\" DevicePath \"\""
	Feb 29 02:32:26 addons-847636 kubelet[1363]: E0229 02:32:26.899307    1363 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4859b0c916f002dbf4197f08dcda60d1c93120efa383be47d0322859fceeadf7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4859b0c916f002dbf4197f08dcda60d1c93120efa383be47d0322859fceeadf7/diff: no such file or directory, extraDiskErr: <nil>
	Feb 29 02:32:27 addons-847636 kubelet[1363]: I0229 02:32:27.467520    1363 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1309e9a0-804a-4d1a-82c9-39daf26ceb32" path="/var/lib/kubelet/pods/1309e9a0-804a-4d1a-82c9-39daf26ceb32/volumes"
	
	
	==> storage-provisioner [5b6ba67b37067a84c2e26282ed5cae0a2fc6e482a16e16682bd3a5777dd56bf5] <==
	I0229 02:27:19.982087       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 02:27:20.335269       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 02:27:20.335394       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 02:27:20.788191       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 02:27:20.970068       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-847636_d40cff8e-ca5b-4ef0-85a8-e7b1e5971e29!
	I0229 02:27:21.004325       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c48cd6d-f41c-48a9-8f2e-1ae8de5a646f", APIVersion:"v1", ResourceVersion:"822", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-847636_d40cff8e-ca5b-4ef0-85a8-e7b1e5971e29 became leader
	I0229 02:27:21.191176       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-847636_d40cff8e-ca5b-4ef0-85a8-e7b1e5971e29!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-847636 -n addons-847636
helpers_test.go:261: (dbg) Run:  kubectl --context addons-847636 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.07s)
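
The failing step above is an HTTP request to the nginx ingress with the Host header set to nginx.example.com, which timed out (curl exit status 28). For readers reproducing the check outside the test harness, a minimal Go sketch of an equivalent probe is shown below. It is not part of addons_test.go: it targets the node IP reported by `minikube ip` (192.168.49.2) from the host rather than curling 127.0.0.1 inside the node over SSH, so it only matches the test when the ingress controller is reachable on the node IP; names and timeouts are illustrative assumptions.

// Hypothetical probe, not the minikube test code: retry an HTTP GET against the
// ingress until it answers 200 or the deadline passes.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	nodeIP := "192.168.49.2" // assumed from the `minikube ip` output in this report
	client := &http.Client{Timeout: 5 * time.Second}
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet, "http://"+nodeIP+"/", nil)
		if err != nil {
			panic(err)
		}
		// Setting req.Host routes the request through the nginx.example.com ingress rule.
		req.Host = "nginx.example.com"

		resp, err := client.Do(req)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("ingress responded with 200 OK")
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("ingress did not answer before the deadline")
}

The retry loop mirrors the test's behavior of waiting for the ingress rather than failing on the first refused or timed-out request.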

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (37.03s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-552840 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-552840 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (30.536082796s)

                                                
                                                
-- stdout --
	* [functional-552840] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node functional-552840 in cluster functional-552840
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Updating the running docker "functional-552840" container ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:35:41.638113 1173243 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-552840" hosting pod "coredns-5dd5756b68-8tbdk" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-552840" has status "Ready":"False"
	E0229 02:35:41.645347 1173243 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-552840" hosting pod "etcd-functional-552840" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-552840" has status "Ready":"False"
	E0229 02:35:41.652597 1173243 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-552840" hosting pod "kube-apiserver-functional-552840" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-552840" has status "Ready":"False"
	E0229 02:35:41.659730 1173243 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-552840" hosting pod "kube-controller-manager-functional-552840" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-552840" has status "Ready":"False"
	E0229 02:35:42.022176 1173243 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-552840" hosting pod "kube-proxy-2d98k" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-552840" has status "Ready":"False"
	E0229 02:35:42.219066 1173243 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-552840" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-552840": dial tcp 192.168.49.2:8441: connect: connection refused
	E0229 02:35:42.232332 1173243 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.49.2:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8441: connect: connection refused
	E0229 02:35:42.429941 1173243 start.go:894] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-552840": Get "https://192.168.49.2:8441/api/v1/nodes/functional-552840": dial tcp 192.168.49.2:8441: connect: connection refused
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-552840 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 30.536285839s for "functional-552840" cluster.
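The stderr above shows every post-restart check failing with "connection refused" against https://192.168.49.2:8441, i.e. the apiserver never came back up after the restart with the extra admission-plugin flag. As a debugging aid (not part of functional_test.go), a minimal Go sketch that waits for that endpoint to accept TCP connections again is shown below; the address and timeout are assumptions taken from the errors above.

// Hypothetical wait helper, not the minikube test code: block until the
// apiserver port accepts TCP connections again after a restart.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.49.2:8441" // assumed from the "connection refused" errors above
	deadline := time.Now().Add(3 * time.Minute)

	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver is accepting connections on", addr)
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("apiserver did not come back on", addr, "before the deadline")
}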
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-552840
helpers_test.go:235: (dbg) docker inspect functional-552840:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6252194fa0aef66495077c56ff554e6030c203d87e998c3b855436df8b6fa6c0",
	        "Created": "2024-02-29T02:33:53.692598427Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1168944,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T02:33:53.989789425Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/6252194fa0aef66495077c56ff554e6030c203d87e998c3b855436df8b6fa6c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6252194fa0aef66495077c56ff554e6030c203d87e998c3b855436df8b6fa6c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/6252194fa0aef66495077c56ff554e6030c203d87e998c3b855436df8b6fa6c0/hosts",
	        "LogPath": "/var/lib/docker/containers/6252194fa0aef66495077c56ff554e6030c203d87e998c3b855436df8b6fa6c0/6252194fa0aef66495077c56ff554e6030c203d87e998c3b855436df8b6fa6c0-json.log",
	        "Name": "/functional-552840",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-552840:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-552840",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b5489b180f6cd978898b3c7ffdd7176513c0d3ee0d98b723e31fd98683060077-init/diff:/var/lib/docker/overlay2/330c2f3296cde464d6c1a52ceb432efd04754f92c402ca5b9f20e3ccc2c40d71/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5489b180f6cd978898b3c7ffdd7176513c0d3ee0d98b723e31fd98683060077/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5489b180f6cd978898b3c7ffdd7176513c0d3ee0d98b723e31fd98683060077/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5489b180f6cd978898b3c7ffdd7176513c0d3ee0d98b723e31fd98683060077/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-552840",
	                "Source": "/var/lib/docker/volumes/functional-552840/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-552840",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-552840",
	                "name.minikube.sigs.k8s.io": "functional-552840",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8ce7384639ac9d3f169505a39ab67f19b892ab49db9932751bcadcf8d025f01",
	            "SandboxKey": "/var/run/docker/netns/c8ce7384639a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34047"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34046"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34043"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34045"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34044"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-552840": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6252194fa0ae",
	                        "functional-552840"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "e1458d5112b07d3cb1e4488082255815fa76faf508cf834ceeb32665060d1c6e",
	                    "EndpointID": "efbbb90e794c873b197b260185adc13c73beea782b7c2df1896f9a3327ae0636",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "functional-552840",
	                        "6252194fa0ae"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-552840 -n functional-552840
helpers_test.go:239: (dbg) Done: out/minikube-linux-arm64 status --format={{.Host}} -p functional-552840 -n functional-552840: (4.266661307s)
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-552840 logs -n 25: (1.73330446s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-870964 --log_dir                                                  | nospam-870964     | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:33 UTC |
	|         | /tmp/nospam-870964 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-870964 --log_dir                                                  | nospam-870964     | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:33 UTC |
	|         | /tmp/nospam-870964 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-870964 --log_dir                                                  | nospam-870964     | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:33 UTC |
	|         | /tmp/nospam-870964 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-870964 --log_dir                                                  | nospam-870964     | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:33 UTC |
	|         | /tmp/nospam-870964 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-870964 --log_dir                                                  | nospam-870964     | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:33 UTC |
	|         | /tmp/nospam-870964 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-870964 --log_dir                                                  | nospam-870964     | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:33 UTC |
	|         | /tmp/nospam-870964 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-870964                                                         | nospam-870964     | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:33 UTC |
	| start   | -p functional-552840                                                     | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:34 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                   |         |         |                     |                     |
	|         | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| start   | -p functional-552840                                                     | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:34 UTC | 29 Feb 24 02:35 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-552840 cache add                                              | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-552840 cache add                                              | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-552840 cache add                                              | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-552840 cache add                                              | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | minikube-local-cache-test:functional-552840                              |                   |         |         |                     |                     |
	| cache   | functional-552840 cache delete                                           | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | minikube-local-cache-test:functional-552840                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	| ssh     | functional-552840 ssh sudo                                               | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-552840                                                        | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-552840 ssh                                                    | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-552840 cache reload                                           | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	| ssh     | functional-552840 ssh                                                    | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-552840 kubectl --                                             | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | --context functional-552840                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-552840                                                     | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:35:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:35:11.971291 1173243 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:35:11.971454 1173243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:35:11.971459 1173243 out.go:304] Setting ErrFile to fd 2...
	I0229 02:35:11.971463 1173243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:35:11.971707 1173243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
	I0229 02:35:11.972084 1173243 out.go:298] Setting JSON to false
	I0229 02:35:11.973047 1173243 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22658,"bootTime":1709151454,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0229 02:35:11.973108 1173243 start.go:139] virtualization:  
	I0229 02:35:11.975682 1173243 out.go:177] * [functional-552840] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0229 02:35:11.977478 1173243 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:35:11.979316 1173243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:35:11.977535 1173243 notify.go:220] Checking for updates...
	I0229 02:35:11.982973 1173243 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	I0229 02:35:11.984819 1173243 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	I0229 02:35:11.986595 1173243 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0229 02:35:11.988324 1173243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:35:11.990724 1173243 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:35:11.990818 1173243 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:35:12.018681 1173243 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0229 02:35:12.018825 1173243 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:35:12.093068 1173243 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:69 SystemTime:2024-02-29 02:35:12.082964342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:35:12.093172 1173243 docker.go:295] overlay module found
	I0229 02:35:12.097493 1173243 out.go:177] * Using the docker driver based on existing profile
	I0229 02:35:12.099751 1173243 start.go:299] selected driver: docker
	I0229 02:35:12.099760 1173243 start.go:903] validating driver "docker" against &{Name:functional-552840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-552840 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:35:12.099863 1173243 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:35:12.099964 1173243 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:35:12.167418 1173243 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:69 SystemTime:2024-02-29 02:35:12.157418121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:35:12.167797 1173243 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:35:12.167857 1173243 cni.go:84] Creating CNI manager for ""
	I0229 02:35:12.167865 1173243 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0229 02:35:12.167871 1173243 start_flags.go:323] config:
	{Name:functional-552840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-552840 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:35:12.170279 1173243 out.go:177] * Starting control plane node functional-552840 in cluster functional-552840
	I0229 02:35:12.171891 1173243 cache.go:121] Beginning downloading kic base image for docker with crio
	I0229 02:35:12.173653 1173243 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 02:35:12.175335 1173243 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:35:12.175390 1173243 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0229 02:35:12.175416 1173243 cache.go:56] Caching tarball of preloaded images
	I0229 02:35:12.175432 1173243 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 02:35:12.175505 1173243 preload.go:174] Found /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0229 02:35:12.175514 1173243 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 02:35:12.175624 1173243 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/config.json ...
	I0229 02:35:12.193744 1173243 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 02:35:12.193759 1173243 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 02:35:12.193781 1173243 cache.go:194] Successfully downloaded all kic artifacts
	I0229 02:35:12.193810 1173243 start.go:365] acquiring machines lock for functional-552840: {Name:mk91e80c5c5c9e73e405b54d958824b37b1938d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:35:12.193904 1173243 start.go:369] acquired machines lock for "functional-552840" in 67.224µs
	I0229 02:35:12.193929 1173243 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:35:12.193934 1173243 fix.go:54] fixHost starting: 
	I0229 02:35:12.194250 1173243 cli_runner.go:164] Run: docker container inspect functional-552840 --format={{.State.Status}}
	I0229 02:35:12.216160 1173243 fix.go:102] recreateIfNeeded on functional-552840: state=Running err=<nil>
	W0229 02:35:12.216179 1173243 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:35:12.218160 1173243 out.go:177] * Updating the running docker "functional-552840" container ...
	I0229 02:35:12.219892 1173243 machine.go:88] provisioning docker machine ...
	I0229 02:35:12.219913 1173243 ubuntu.go:169] provisioning hostname "functional-552840"
	I0229 02:35:12.220022 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:12.236489 1173243 main.go:141] libmachine: Using SSH client type: native
	I0229 02:35:12.236749 1173243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 34047 <nil> <nil>}
	I0229 02:35:12.236758 1173243 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-552840 && echo "functional-552840" | sudo tee /etc/hostname
	I0229 02:35:12.380107 1173243 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-552840
	
	I0229 02:35:12.380179 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:12.397114 1173243 main.go:141] libmachine: Using SSH client type: native
	I0229 02:35:12.397386 1173243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 34047 <nil> <nil>}
	I0229 02:35:12.397402 1173243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-552840' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-552840/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-552840' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:35:12.528044 1173243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:35:12.528061 1173243 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18063-1148303/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-1148303/.minikube}
	I0229 02:35:12.528081 1173243 ubuntu.go:177] setting up certificates
	I0229 02:35:12.528090 1173243 provision.go:83] configureAuth start
	I0229 02:35:12.528151 1173243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-552840
	I0229 02:35:12.546153 1173243 provision.go:138] copyHostCerts
	I0229 02:35:12.546207 1173243 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.pem, removing ...
	I0229 02:35:12.546216 1173243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.pem
	I0229 02:35:12.546288 1173243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.pem (1082 bytes)
	I0229 02:35:12.546398 1173243 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-1148303/.minikube/cert.pem, removing ...
	I0229 02:35:12.546402 1173243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-1148303/.minikube/cert.pem
	I0229 02:35:12.546428 1173243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-1148303/.minikube/cert.pem (1123 bytes)
	I0229 02:35:12.546478 1173243 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-1148303/.minikube/key.pem, removing ...
	I0229 02:35:12.546481 1173243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-1148303/.minikube/key.pem
	I0229 02:35:12.546503 1173243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-1148303/.minikube/key.pem (1675 bytes)
	I0229 02:35:12.546544 1173243 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca-key.pem org=jenkins.functional-552840 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-552840]
	I0229 02:35:12.808371 1173243 provision.go:172] copyRemoteCerts
	I0229 02:35:12.808431 1173243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:35:12.808470 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:12.824519 1173243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/functional-552840/id_rsa Username:docker}
	I0229 02:35:12.925146 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:35:12.950241 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:35:12.975391 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:35:13.000734 1173243 provision.go:86] duration metric: configureAuth took 472.629442ms
	I0229 02:35:13.000755 1173243 ubuntu.go:193] setting minikube options for container-runtime
	I0229 02:35:13.000990 1173243 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:35:13.001124 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:13.017928 1173243 main.go:141] libmachine: Using SSH client type: native
	I0229 02:35:13.018150 1173243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 34047 <nil> <nil>}
	I0229 02:35:13.018161 1173243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:35:18.403955 1173243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:35:18.403967 1173243 machine.go:91] provisioned docker machine in 6.184066095s
	I0229 02:35:18.403978 1173243 start.go:300] post-start starting for "functional-552840" (driver="docker")
	I0229 02:35:18.404015 1173243 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:35:18.404089 1173243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:35:18.404129 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:18.421798 1173243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/functional-552840/id_rsa Username:docker}
	I0229 02:35:18.516918 1173243 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:35:18.520314 1173243 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0229 02:35:18.520339 1173243 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0229 02:35:18.520351 1173243 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0229 02:35:18.520357 1173243 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0229 02:35:18.520366 1173243 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-1148303/.minikube/addons for local assets ...
	I0229 02:35:18.520422 1173243 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-1148303/.minikube/files for local assets ...
	I0229 02:35:18.520500 1173243 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/11536582.pem -> 11536582.pem in /etc/ssl/certs
	I0229 02:35:18.520580 1173243 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/test/nested/copy/1153658/hosts -> hosts in /etc/test/nested/copy/1153658
	I0229 02:35:18.520626 1173243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1153658
	I0229 02:35:18.529483 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/11536582.pem --> /etc/ssl/certs/11536582.pem (1708 bytes)
	I0229 02:35:18.554044 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/test/nested/copy/1153658/hosts --> /etc/test/nested/copy/1153658/hosts (40 bytes)
	I0229 02:35:18.578768 1173243 start.go:303] post-start completed in 174.775589ms
	I0229 02:35:18.578843 1173243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 02:35:18.578903 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:18.604215 1173243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/functional-552840/id_rsa Username:docker}
	I0229 02:35:18.692925 1173243 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 02:35:18.697972 1173243 fix.go:56] fixHost completed within 6.504030621s
	I0229 02:35:18.697989 1173243 start.go:83] releasing machines lock for "functional-552840", held for 6.504077595s
	I0229 02:35:18.698056 1173243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-552840
	I0229 02:35:18.715122 1173243 ssh_runner.go:195] Run: cat /version.json
	I0229 02:35:18.715149 1173243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:35:18.715164 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:18.715199 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:18.739527 1173243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/functional-552840/id_rsa Username:docker}
	I0229 02:35:18.740050 1173243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/functional-552840/id_rsa Username:docker}
	I0229 02:35:18.942187 1173243 ssh_runner.go:195] Run: systemctl --version
	I0229 02:35:18.946623 1173243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:35:19.088204 1173243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 02:35:19.092518 1173243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:35:19.101459 1173243 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0229 02:35:19.101533 1173243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:35:19.110620 1173243 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0229 02:35:19.110634 1173243 start.go:475] detecting cgroup driver to use...
	I0229 02:35:19.110667 1173243 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 02:35:19.110713 1173243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:35:19.123571 1173243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:35:19.136237 1173243 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:35:19.136290 1173243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:35:19.149879 1173243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:35:19.161803 1173243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:35:19.293314 1173243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:35:19.430941 1173243 docker.go:233] disabling docker service ...
	I0229 02:35:19.430998 1173243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:35:19.444535 1173243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:35:19.456228 1173243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:35:19.578188 1173243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:35:19.697974 1173243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:35:19.708954 1173243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:35:19.726360 1173243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:35:19.726427 1173243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:35:19.735945 1173243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:35:19.736095 1173243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:35:19.746121 1173243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:35:19.755691 1173243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:35:19.765403 1173243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:35:19.774535 1173243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:35:19.782952 1173243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:35:19.791460 1173243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:35:19.910281 1173243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:35:26.549103 1173243 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.638797636s)
	I0229 02:35:26.549119 1173243 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:35:26.549178 1173243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:35:26.553348 1173243 start.go:543] Will wait 60s for crictl version
	I0229 02:35:26.553401 1173243 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.556918 1173243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:35:26.591422 1173243 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0229 02:35:26.591498 1173243 ssh_runner.go:195] Run: crio --version
	I0229 02:35:26.629016 1173243 ssh_runner.go:195] Run: crio --version
	I0229 02:35:26.668831 1173243 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0229 02:35:26.670709 1173243 cli_runner.go:164] Run: docker network inspect functional-552840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 02:35:26.686368 1173243 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0229 02:35:26.692104 1173243 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0229 02:35:26.693901 1173243 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:35:26.693980 1173243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:35:26.737092 1173243 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:35:26.737104 1173243 crio.go:415] Images already preloaded, skipping extraction
	I0229 02:35:26.737154 1173243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:35:26.772751 1173243 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:35:26.772763 1173243 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:35:26.772845 1173243 ssh_runner.go:195] Run: crio config
	I0229 02:35:26.843031 1173243 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0229 02:35:26.843064 1173243 cni.go:84] Creating CNI manager for ""
	I0229 02:35:26.843073 1173243 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0229 02:35:26.843084 1173243 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:35:26.843101 1173243 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-552840 NodeName:functional-552840 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:35:26.843233 1173243 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-552840"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:35:26.843298 1173243 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=functional-552840 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-552840 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0229 02:35:26.843369 1173243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:35:26.852196 1173243 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:35:26.852270 1173243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:35:26.860842 1173243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (427 bytes)
	I0229 02:35:26.878759 1173243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:35:26.897773 1173243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1948 bytes)
	I0229 02:35:26.916285 1173243 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0229 02:35:26.919836 1173243 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840 for IP: 192.168.49.2
	I0229 02:35:26.919863 1173243 certs.go:190] acquiring lock for shared ca certs: {Name:mk629bf08f2bf9bf9dfe188d027237a0e3bc8e94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:35:26.920107 1173243 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.key
	I0229 02:35:26.920167 1173243 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.key
	I0229 02:35:26.920247 1173243 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.key
	I0229 02:35:26.920300 1173243 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/apiserver.key.dd3b5fb2
	I0229 02:35:26.920341 1173243 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/proxy-client.key
	I0229 02:35:26.920444 1173243 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/1153658.pem (1338 bytes)
	W0229 02:35:26.920474 1173243 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/1153658_empty.pem, impossibly tiny 0 bytes
	I0229 02:35:26.920482 1173243 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 02:35:26.920507 1173243 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:35:26.920530 1173243 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:35:26.920550 1173243 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/key.pem (1675 bytes)
	I0229 02:35:26.920591 1173243 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/11536582.pem (1708 bytes)
	I0229 02:35:26.921207 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:35:26.944766 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:35:26.968706 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:35:26.992831 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:35:27.020827 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:35:27.047057 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:35:27.072629 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:35:27.097900 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 02:35:27.122834 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:35:27.147581 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/1153658.pem --> /usr/share/ca-certificates/1153658.pem (1338 bytes)
	I0229 02:35:27.171830 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/11536582.pem --> /usr/share/ca-certificates/11536582.pem (1708 bytes)
	I0229 02:35:27.195856 1173243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:35:27.213782 1173243 ssh_runner.go:195] Run: openssl version
	I0229 02:35:27.219210 1173243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:35:27.228730 1173243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:35:27.232257 1173243 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:35:27.232320 1173243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:35:27.239211 1173243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:35:27.248293 1173243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1153658.pem && ln -fs /usr/share/ca-certificates/1153658.pem /etc/ssl/certs/1153658.pem"
	I0229 02:35:27.257753 1173243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1153658.pem
	I0229 02:35:27.261325 1173243 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 02:33 /usr/share/ca-certificates/1153658.pem
	I0229 02:35:27.261388 1173243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1153658.pem
	I0229 02:35:27.268447 1173243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1153658.pem /etc/ssl/certs/51391683.0"
	I0229 02:35:27.277435 1173243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11536582.pem && ln -fs /usr/share/ca-certificates/11536582.pem /etc/ssl/certs/11536582.pem"
	I0229 02:35:27.287415 1173243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11536582.pem
	I0229 02:35:27.291055 1173243 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 02:33 /usr/share/ca-certificates/11536582.pem
	I0229 02:35:27.291109 1173243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11536582.pem
	I0229 02:35:27.298287 1173243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11536582.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:35:27.307625 1173243 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:35:27.311007 1173243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:35:27.317815 1173243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:35:27.324964 1173243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:35:27.332033 1173243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:35:27.339072 1173243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:35:27.346158 1173243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 02:35:27.353169 1173243 kubeadm.go:404] StartCluster: {Name:functional-552840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-552840 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:35:27.353265 1173243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:35:27.353330 1173243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:35:27.391533 1173243 cri.go:89] found id: "c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5"
	I0229 02:35:27.391544 1173243 cri.go:89] found id: "bdeb88db5e381df3607fe01c5c069c2341cdce05126e9120f7e5563b31f24ba6"
	I0229 02:35:27.391549 1173243 cri.go:89] found id: "9ca0a2bfa68742f14f269004a84e16b497170712db0a2560a68ae05ada341959"
	I0229 02:35:27.391552 1173243 cri.go:89] found id: "a5d58ef036431a8b1372c65a6be540c3a5c6454bd9c26ac1100bed8d4fda8481"
	I0229 02:35:27.391554 1173243 cri.go:89] found id: "19ce3862c0c2026f19e1fb2a8e2c3bb1d28fa9f100f0fa1292b1587f5308ac6c"
	I0229 02:35:27.391557 1173243 cri.go:89] found id: "efe1e980fdbf9c294b396e3121d52c5f640f67c72539dd94e3f35d8ec10317a8"
	I0229 02:35:27.391560 1173243 cri.go:89] found id: "d8a16037227628d5fb5de51a2dfcc744c42a2954b082ff1b61a2a303be38597a"
	I0229 02:35:27.391562 1173243 cri.go:89] found id: "6e92f8b2eba8cfac95cf4102c72f2fbc71182ca6dfbfe66a6b7a531b14771de3"
	I0229 02:35:27.391564 1173243 cri.go:89] found id: "bc4ae96187989d355202ac95aa2f59901d815be233807c61e9a68a3fb0f1c27f"
	I0229 02:35:27.391569 1173243 cri.go:89] found id: "cbf436f6e5267f33cffcf67cea8b2246e948a7341701898c4ee42e5a985b4b3c"
	I0229 02:35:27.391572 1173243 cri.go:89] found id: "2b0274a27cd4188b99a7ce81f7a4b08de4e54e2c9b28f7971dfc8d3ade720d51"
	I0229 02:35:27.391574 1173243 cri.go:89] found id: "046fa64f2f4673fdaebc883384d0a23d01e7c6a196fbe99f6c8dce5ef3d62cda"
	I0229 02:35:27.391576 1173243 cri.go:89] found id: "3bcf1eea258fb53a0de8f2f11c842d89079b1afb774231094db080223c3baa60"
	I0229 02:35:27.391578 1173243 cri.go:89] found id: "6d678d55db42298c10ff914ce710ed697ee6e1dc73edb5d71b9e723a84dfda7f"
	I0229 02:35:27.391584 1173243 cri.go:89] found id: "ce4aaddcb5601ac2818903bb5d2d76c9985baad7620afc42c1a81568f015e073"
	I0229 02:35:27.391586 1173243 cri.go:89] found id: "624fc2a129334e508d29d4775c90e3d489a1da1f8bde5a53b1eea6e3114b752b"
	I0229 02:35:27.391588 1173243 cri.go:89] found id: ""
	I0229 02:35:27.391635 1173243 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Feb 29 02:35:41 functional-552840 crio[3876]: time="2024-02-29 02:35:41.491817111Z" level=info msg="Starting container: cac1e01798cbb742c5bbeadcda9fd910bcf270eb81e50eb6772e434e5f489fea" id=1630ab51-5427-457f-964b-5f2df0ce31d5 name=/runtime.v1.RuntimeService/StartContainer
	Feb 29 02:35:41 functional-552840 crio[3876]: time="2024-02-29 02:35:41.504246170Z" level=info msg="Started container" PID=4872 containerID=cac1e01798cbb742c5bbeadcda9fd910bcf270eb81e50eb6772e434e5f489fea description=kube-system/storage-provisioner/storage-provisioner id=1630ab51-5427-457f-964b-5f2df0ce31d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cb8464efcbc81f5127d7278965f37404be327eddc0994855f1c1f93ee779ab29
	Feb 29 02:35:41 functional-552840 conmon[4805]: conmon 7439b5d3d896b4ad304a <ninfo>: container 4816 exited with status 1
	Feb 29 02:35:41 functional-552840 crio[3876]: time="2024-02-29 02:35:41.552535903Z" level=info msg="Created container 1f68da0225b139c0ce0511e6caa30f9c805eed8e92911f0fccf312063e176217: kube-system/kube-proxy-2d98k/kube-proxy" id=6959c839-d39a-4b14-9a40-300c7f2b89f9 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 29 02:35:41 functional-552840 crio[3876]: time="2024-02-29 02:35:41.553223760Z" level=info msg="Starting container: 1f68da0225b139c0ce0511e6caa30f9c805eed8e92911f0fccf312063e176217" id=5f273844-4623-4ed5-af2f-e1fbc1660f9a name=/runtime.v1.RuntimeService/StartContainer
	Feb 29 02:35:41 functional-552840 crio[3876]: time="2024-02-29 02:35:41.569549248Z" level=info msg="Started container" PID=4864 containerID=1f68da0225b139c0ce0511e6caa30f9c805eed8e92911f0fccf312063e176217 description=kube-system/kube-proxy-2d98k/kube-proxy id=5f273844-4623-4ed5-af2f-e1fbc1660f9a name=/runtime.v1.RuntimeService/StartContainer sandboxID=6e2078382a88a6e616609dc135a82e70f6e7d2ae553d9dfb24b6363f9fc82a6a
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.118795445Z" level=info msg="Stopping container: a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee (timeout: 2s)" id=3ac0a2d7-59ac-46ba-8236-651dd559013e name=/runtime.v1.RuntimeService/StopContainer
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.302606171Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.28.4" id=fac563b0-261a-47e1-8b24-6c08e7ec632d name=/runtime.v1.ImageService/ImageStatus
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.302870620Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419,RepoTags:[registry.k8s.io/kube-apiserver:v1.28.4],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2],Size_:121119694,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=fac563b0-261a-47e1-8b24-6c08e7ec632d name=/runtime.v1.ImageService/ImageStatus
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.310490221Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.28.4" id=c7be3898-ce8a-42a2-97bf-c0be13ce550c name=/runtime.v1.ImageService/ImageStatus
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.310865587Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419,RepoTags:[registry.k8s.io/kube-apiserver:v1.28.4],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2],Size_:121119694,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=c7be3898-ce8a-42a2-97bf-c0be13ce550c name=/runtime.v1.ImageService/ImageStatus
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.315976467Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-552840/kube-apiserver" id=30d2058e-de95-4cba-bb4c-754c7c820f80 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.316270349Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.468037692Z" level=info msg="Created container 2dd23e64b5a04f491496e32c65545edfaa3cff14278ad1d25a28f5752d73e777: kube-system/kube-apiserver-functional-552840/kube-apiserver" id=30d2058e-de95-4cba-bb4c-754c7c820f80 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.468576315Z" level=info msg="Starting container: 2dd23e64b5a04f491496e32c65545edfaa3cff14278ad1d25a28f5752d73e777" id=52f187f0-938b-446f-bd63-08e9b2d44fea name=/runtime.v1.RuntimeService/StartContainer
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.477791090Z" level=info msg="Started container" PID=5065 containerID=2dd23e64b5a04f491496e32c65545edfaa3cff14278ad1d25a28f5752d73e777 description=kube-system/kube-apiserver-functional-552840/kube-apiserver id=52f187f0-938b-446f-bd63-08e9b2d44fea name=/runtime.v1.RuntimeService/StartContainer sandboxID=a41c0e7761bd985efcc079309592a15c0157fe88c758e8b11ff5231a6087a38c
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.130070267Z" level=warning msg="Stopping container a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=3ac0a2d7-59ac-46ba-8236-651dd559013e name=/runtime.v1.RuntimeService/StopContainer
	Feb 29 02:35:44 functional-552840 conmon[4199]: conmon a0afe963a15ff316218c <ninfo>: container 4243 exited with status 137
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.290676567Z" level=info msg="Stopped container a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee: kube-system/kube-apiserver-functional-552840/kube-apiserver" id=3ac0a2d7-59ac-46ba-8236-651dd559013e name=/runtime.v1.RuntimeService/StopContainer
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.291324498Z" level=info msg="Stopping pod sandbox: 4c67c1acabc26b1f0deee3c85cb80278e07cc994b48aec974d02e1d1d2606d1f" id=fa600102-3ad1-46a2-8a37-fa9868b66d81 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.292396869Z" level=info msg="Stopped pod sandbox: 4c67c1acabc26b1f0deee3c85cb80278e07cc994b48aec974d02e1d1d2606d1f" id=fa600102-3ad1-46a2-8a37-fa9868b66d81 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.300312074Z" level=info msg="Removing container: a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee" id=2651d776-3552-46e9-840f-65b678cc73ae name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.323906117Z" level=info msg="Removed container a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee: kube-system/kube-apiserver-functional-552840/kube-apiserver" id=2651d776-3552-46e9-840f-65b678cc73ae name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.325506543Z" level=info msg="Removing container: c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5" id=6cc95ad7-84f8-4ffa-84ed-6ac466316a3f name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.349813033Z" level=info msg="Removed container c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5: kube-system/kube-apiserver-functional-552840/kube-apiserver" id=6cc95ad7-84f8-4ffa-84ed-6ac466316a3f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2dd23e64b5a04       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   5 seconds ago        Running             kube-apiserver            1                   a41c0e7761bd9       kube-apiserver-functional-552840
	1f68da0225b13       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   6 seconds ago        Running             kube-proxy                3                   6e2078382a88a       kube-proxy-2d98k
	cac1e01798cbb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   6 seconds ago        Running             storage-provisioner       3                   cb8464efcbc81       storage-provisioner
	7439b5d3d896b       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   6 seconds ago        Exited              kube-apiserver            0                   a41c0e7761bd9       kube-apiserver-functional-552840
	8865560f7ab58       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   15 seconds ago       Running             coredns                   2                   6034b00de29ce       coredns-5dd5756b68-8tbdk
	180892253ae85       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   19 seconds ago       Running             etcd                      2                   49c286fb82931       etcd-functional-552840
	f09ca2018856f       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d   19 seconds ago       Running             kindnet-cni               2                   abadc97ed2e7c       kindnet-jkrdt
	895d8bbb2ae15       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   19 seconds ago       Running             kube-scheduler            2                   c67cfd6e67653       kube-scheduler-functional-552840
	d1a6a3ea05682       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   19 seconds ago       Exited              storage-provisioner       2                   cb8464efcbc81       storage-provisioner
	a4266bd9b6f32       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   19 seconds ago       Exited              kube-proxy                2                   6e2078382a88a       kube-proxy-2d98k
	97b4c8e2965da       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   20 seconds ago       Running             kube-controller-manager   2                   c4a3625248e0d       kube-controller-manager-functional-552840
	bdeb88db5e381       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d   About a minute ago   Exited              kindnet-cni               1                   abadc97ed2e7c       kindnet-jkrdt
	9ca0a2bfa6874       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   About a minute ago   Exited              kube-scheduler            1                   c67cfd6e67653       kube-scheduler-functional-552840
	a5d58ef036431       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   About a minute ago   Exited              kube-controller-manager   1                   c4a3625248e0d       kube-controller-manager-functional-552840
	d8a1603722762       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   About a minute ago   Exited              etcd                      1                   49c286fb82931       etcd-functional-552840
	6e92f8b2eba8c       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   About a minute ago   Exited              coredns                   1                   6034b00de29ce       coredns-5dd5756b68-8tbdk
	
	
	==> coredns [6e92f8b2eba8cfac95cf4102c72f2fbc71182ca6dfbfe66a6b7a531b14771de3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37545 - 11872 "HINFO IN 7527231318132342997.4885945764046422916. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022920401s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8865560f7ab58fd49bb6fe07c9203f15ed871e8509a01e2b575efe7cb61f0af3] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36539 - 51779 "HINFO IN 2141434055078421631.5050951271688495009. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026712287s
	
	
	==> describe nodes <==
	Name:               functional-552840
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-552840
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=functional-552840
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T02_34_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:34:12 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-552840
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:35:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:35:40 +0000   Thu, 29 Feb 2024 02:34:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:35:40 +0000   Thu, 29 Feb 2024 02:34:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:35:40 +0000   Thu, 29 Feb 2024 02:34:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 29 Feb 2024 02:35:40 +0000   Thu, 29 Feb 2024 02:35:40 +0000   KubeletNotReady              container runtime status check may not have completed yet
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-552840
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 ecea58ec9aa0423bb3d994e859e645e8
	  System UUID:                3055bdb0-59c9-4fca-9eae-56c5680b16e6
	  Boot ID:                    d15cd6b5-a0a6-45f5-95b2-2521c5763941
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-8tbdk                     100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (0%!)(MISSING)        170Mi (2%!)(MISSING)     79s
	  kube-system                 etcd-functional-552840                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (1%!)(MISSING)       0 (0%!)(MISSING)         92s
	  kube-system                 kindnet-jkrdt                                100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (0%!)(MISSING)        50Mi (0%!)(MISSING)      80s
	  kube-system                 kube-apiserver-functional-552840             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2s
	  kube-system                 kube-controller-manager-functional-552840    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         91s
	  kube-system                 kube-proxy-2d98k                             0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         80s
	  kube-system                 kube-scheduler-functional-552840             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         91s
	  kube-system                 storage-provisioner                          0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%!)(MISSING)  100m (5%!)(MISSING)
	  memory             220Mi (2%!)(MISSING)  220Mi (2%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-32Mi     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-64Ki     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type     Reason                   Age                 From             Message
	  ----     ------                   ----                ----             -------
	  Normal   Starting                 77s                 kube-proxy       
	  Normal   Starting                 6s                  kube-proxy       
	  Normal   Starting                 15s                 kube-proxy       
	  Normal   Starting                 58s                 kube-proxy       
	  Normal   Starting                 100s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    99s (x8 over 100s)  kubelet          Node functional-552840 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  99s (x8 over 100s)  kubelet          Node functional-552840 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     99s (x8 over 100s)  kubelet          Node functional-552840 status is now: NodeHasSufficientPID
	  Normal   Starting                 91s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  91s                 kubelet          Node functional-552840 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    91s                 kubelet          Node functional-552840 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     91s                 kubelet          Node functional-552840 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                 node-controller  Node functional-552840 event: Registered Node functional-552840 in Controller
	  Normal   NodeReady                75s                 kubelet          Node functional-552840 status is now: NodeReady
	  Normal   RegisteredNode           46s                 node-controller  Node functional-552840 event: Registered Node functional-552840 in Controller
	  Warning  ContainerGCFailed        31s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 7s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  7s                  kubelet          Node functional-552840 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7s                  kubelet          Node functional-552840 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7s                  kubelet          Node functional-552840 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             7s                  kubelet          Node functional-552840 status is now: NodeNotReady
	  Normal   RegisteredNode           1s                  node-controller  Node functional-552840 event: Registered Node functional-552840 in Controller
	
	
	==> dmesg <==
	[  +0.001066] FS-Cache: O-key=[8] '3a3e5c0100000000'
	[  +0.000719] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000964] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=000000008300a153
	[  +0.001170] FS-Cache: N-key=[8] '3a3e5c0100000000'
	[  +0.002926] FS-Cache: Duplicate cookie detected
	[  +0.000709] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001140] FS-Cache: O-cookie d=000000007d8e8356{9p.inode} n=00000000aff28ed8
	[  +0.001147] FS-Cache: O-key=[8] '3a3e5c0100000000'
	[  +0.000713] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000968] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=0000000058168a0d
	[  +0.001153] FS-Cache: N-key=[8] '3a3e5c0100000000'
	[  +2.636492] FS-Cache: Duplicate cookie detected
	[  +0.000790] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001146] FS-Cache: O-cookie d=000000007d8e8356{9p.inode} n=00000000d8c7b9c4
	[  +0.001157] FS-Cache: O-key=[8] '393e5c0100000000'
	[  +0.000747] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=000000008300a153
	[  +0.001083] FS-Cache: N-key=[8] '393e5c0100000000'
	[  +0.299075] FS-Cache: Duplicate cookie detected
	[  +0.000712] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001037] FS-Cache: O-cookie d=000000007d8e8356{9p.inode} n=00000000aa687443
	[  +0.001052] FS-Cache: O-key=[8] '3f3e5c0100000000'
	[  +0.000728] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=00000000f8ac0df1
	[  +0.001102] FS-Cache: N-key=[8] '3f3e5c0100000000'
	
	
	==> etcd [180892253ae851029a69c43c6e89bf137f324ca1ae4cdd0d874e5738f7d0d9ec] <==
	{"level":"info","ts":"2024-02-29T02:35:28.21547Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T02:35:28.215523Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-02-29T02:35:28.215625Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T02:35:28.215653Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T02:35:28.215662Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T02:35:28.215845Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T02:35:28.21586Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T02:35:28.216375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-02-29T02:35:28.21644Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-02-29T02:35:28.216531Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:35:28.216562Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:35:29.864034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-29T02:35:29.864167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-29T02:35:29.864222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-02-29T02:35:29.864263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-02-29T02:35:29.864296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-02-29T02:35:29.864335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-02-29T02:35:29.864379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-02-29T02:35:29.8722Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-552840 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:35:29.87243Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:35:29.873423Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-29T02:35:29.873605Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:35:29.874553Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:35:29.876006Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:35:29.87608Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [d8a16037227628d5fb5de51a2dfcc744c42a2954b082ff1b61a2a303be38597a] <==
	{"level":"info","ts":"2024-02-29T02:34:45.790329Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T02:34:46.865381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T02:34:46.865438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T02:34:46.865459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-29T02:34:46.865473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T02:34:46.865482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-02-29T02:34:46.8655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-02-29T02:34:46.865508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-02-29T02:34:46.86839Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-552840 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:34:46.868538Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:34:46.86865Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:34:46.86966Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-29T02:34:46.870478Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:34:46.870545Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:34:46.888102Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:35:13.174428Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-29T02:35:13.174473Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-552840","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-02-29T02:35:13.174536Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T02:35:13.174728Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T02:35:13.331209Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T02:35:13.331267Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-29T02:35:13.331331Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-02-29T02:35:13.333689Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T02:35:13.333814Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T02:35:13.333861Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-552840","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 02:35:47 up  6:18,  0 users,  load average: 1.44, 1.41, 2.00
	Linux functional-552840 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [bdeb88db5e381df3607fe01c5c069c2341cdce05126e9120f7e5563b31f24ba6] <==
	I0229 02:34:45.700308       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0229 02:34:45.704089       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0229 02:34:45.706591       1 main.go:116] setting mtu 1500 for CNI 
	I0229 02:34:45.706668       1 main.go:146] kindnetd IP family: "ipv4"
	I0229 02:34:45.706712       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0229 02:34:49.584541       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:34:49.584655       1 main.go:227] handling current node
	I0229 02:34:59.598152       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:34:59.598178       1 main.go:227] handling current node
	I0229 02:35:09.609332       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:35:09.609437       1 main.go:227] handling current node
	
	
	==> kindnet [f09ca2018856f1e8ed6f18f814150d53d022797d675ca22e78bd16c440b944d6] <==
	I0229 02:35:27.937285       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0229 02:35:27.937500       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0229 02:35:27.937649       1 main.go:116] setting mtu 1500 for CNI 
	I0229 02:35:27.937666       1 main.go:146] kindnetd IP family: "ipv4"
	I0229 02:35:27.937681       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0229 02:35:28.122675       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0229 02:35:28.122924       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0229 02:35:32.076650       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:35:32.076685       1 main.go:227] handling current node
	I0229 02:35:42.090734       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:35:42.090780       1 main.go:227] handling current node
	
	
	==> kube-apiserver [2dd23e64b5a04f491496e32c65545edfaa3cff14278ad1d25a28f5752d73e777] <==
	I0229 02:35:44.998482       1 aggregator.go:164] waiting for initial CRD sync...
	I0229 02:35:44.999005       1 controller.go:116] Starting legacy_token_tracking_controller
	I0229 02:35:44.999086       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0229 02:35:44.999155       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0229 02:35:45.088435       1 available_controller.go:423] Starting AvailableConditionController
	I0229 02:35:45.088554       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0229 02:35:45.088626       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0229 02:35:45.088660       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0229 02:35:45.388874       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 02:35:45.389245       1 aggregator.go:166] initial CRD sync complete...
	I0229 02:35:45.389351       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 02:35:45.389358       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 02:35:45.399215       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 02:35:45.471556       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 02:35:45.488500       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 02:35:45.492137       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 02:35:45.492838       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0229 02:35:45.492861       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0229 02:35:45.493465       1 cache.go:39] Caches are synced for autoregister controller
	I0229 02:35:45.493599       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 02:35:45.498182       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 02:35:45.993165       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0229 02:35:46.350869       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0229 02:35:46.352231       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 02:35:46.369412       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [7439b5d3d896b4ad304abac1a1ff53a83243a7f3deba661c1d2132a286f1803d] <==
	I0229 02:35:41.521705       1 options.go:220] external host was not specified, using 192.168.49.2
	I0229 02:35:41.522782       1 server.go:148] Version: v1.28.4
	I0229 02:35:41.522815       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0229 02:35:41.523117       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-controller-manager [97b4c8e2965da7e5e8f21bc6bd3c5dc250205ac07f5ab0ce697b91990e0a158b] <==
	I0229 02:35:46.394983       1 shared_informer.go:318] Caches are synced for taint
	I0229 02:35:46.395130       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0229 02:35:46.395229       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-552840"
	I0229 02:35:46.395294       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0229 02:35:46.395344       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0229 02:35:46.395405       1 taint_manager.go:210] "Sending events to api server"
	I0229 02:35:46.397017       1 event.go:307] "Event occurred" object="functional-552840" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-552840 event: Registered Node functional-552840 in Controller"
	I0229 02:35:46.404659       1 shared_informer.go:318] Caches are synced for HPA
	I0229 02:35:46.409704       1 shared_informer.go:318] Caches are synced for cronjob
	I0229 02:35:46.412642       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-8tbdk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 02:35:46.418763       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-functional-552840" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 02:35:46.420257       1 event.go:307] "Event occurred" object="kube-system/kindnet-jkrdt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 02:35:46.420747       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-2d98k" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 02:35:46.427034       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-functional-552840" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 02:35:46.430864       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 02:35:46.447851       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 02:35:46.462318       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0229 02:35:46.481433       1 shared_informer.go:318] Caches are synced for disruption
	I0229 02:35:46.493067       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 02:35:46.567966       1 shared_informer.go:318] Caches are synced for attach detach
	I0229 02:35:46.760325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="347.05896ms"
	I0229 02:35:46.760511       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.092µs"
	I0229 02:35:46.937582       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 02:35:46.937686       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0229 02:35:46.973699       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [a5d58ef036431a8b1372c65a6be540c3a5c6454bd9c26ac1100bed8d4fda8481] <==
	I0229 02:35:01.165430       1 shared_informer.go:318] Caches are synced for service account
	I0229 02:35:01.180615       1 shared_informer.go:318] Caches are synced for TTL
	I0229 02:35:01.187106       1 shared_informer.go:318] Caches are synced for stateful set
	I0229 02:35:01.189398       1 shared_informer.go:318] Caches are synced for endpoint
	I0229 02:35:01.193669       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0229 02:35:01.203484       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0229 02:35:01.211978       1 shared_informer.go:318] Caches are synced for daemon sets
	I0229 02:35:01.214156       1 shared_informer.go:318] Caches are synced for taint
	I0229 02:35:01.214359       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0229 02:35:01.214487       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-552840"
	I0229 02:35:01.214582       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0229 02:35:01.214635       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0229 02:35:01.214697       1 taint_manager.go:210] "Sending events to api server"
	I0229 02:35:01.215033       1 event.go:307] "Event occurred" object="functional-552840" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-552840 event: Registered Node functional-552840 in Controller"
	I0229 02:35:01.239617       1 shared_informer.go:318] Caches are synced for job
	I0229 02:35:01.248823       1 shared_informer.go:318] Caches are synced for attach detach
	I0229 02:35:01.269156       1 shared_informer.go:318] Caches are synced for cronjob
	I0229 02:35:01.272054       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 02:35:01.288809       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0229 02:35:01.296055       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0229 02:35:01.309331       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0229 02:35:01.336732       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 02:35:01.709394       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 02:35:01.709426       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0229 02:35:01.728819       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [1f68da0225b139c0ce0511e6caa30f9c805eed8e92911f0fccf312063e176217] <==
	I0229 02:35:41.657678       1 server_others.go:69] "Using iptables proxy"
	I0229 02:35:41.674278       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0229 02:35:41.707754       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0229 02:35:41.709606       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:35:41.709706       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0229 02:35:41.709740       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0229 02:35:41.709805       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:35:41.710240       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:35:41.710294       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:35:41.711138       1 config.go:188] "Starting service config controller"
	I0229 02:35:41.711390       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:35:41.711425       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:35:41.711430       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:35:41.711962       1 config.go:315] "Starting node config controller"
	I0229 02:35:41.711974       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:35:41.812062       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:35:41.812160       1 shared_informer.go:318] Caches are synced for service config
	I0229 02:35:41.812185       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [a4266bd9b6f329b21ea63a7d11517dd9f756ce2d5ae0176a958e5513622c2f75] <==
	I0229 02:35:29.685312       1 server_others.go:69] "Using iptables proxy"
	I0229 02:35:32.108500       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0229 02:35:32.168544       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0229 02:35:32.170955       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:35:32.171069       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0229 02:35:32.171115       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0229 02:35:32.171175       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:35:32.171411       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:35:32.171655       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:35:32.172545       1 config.go:188] "Starting service config controller"
	I0229 02:35:32.172629       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:35:32.172690       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:35:32.172729       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:35:32.173306       1 config.go:315] "Starting node config controller"
	I0229 02:35:32.173370       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:35:32.273100       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:35:32.274187       1 shared_informer.go:318] Caches are synced for node config
	I0229 02:35:32.274233       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [895d8bbb2ae15cad2c4f4d1421f99d07fd15bd857293b3599a8fc09406054a89] <==
	W0229 02:35:32.025044       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 02:35:32.025125       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 02:35:32.025185       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 02:35:32.070081       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0229 02:35:32.070211       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:35:32.075954       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 02:35:32.076228       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 02:35:32.076456       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:35:32.076271       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 02:35:32.179922       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0229 02:35:45.317696       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	E0229 02:35:45.319375       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E0229 02:35:45.324148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E0229 02:35:45.324299       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0229 02:35:45.324380       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: unknown (get pods)
	E0229 02:35:45.324445       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E0229 02:35:45.324509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: unknown (get nodes)
	E0229 02:35:45.324577       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E0229 02:35:45.324635       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0229 02:35:45.324717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0229 02:35:45.328056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0229 02:35:45.328233       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services)
	E0229 02:35:45.328347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E0229 02:35:45.328401       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E0229 02:35:45.328459       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	
	
	==> kube-scheduler [9ca0a2bfa68742f14f269004a84e16b497170712db0a2560a68ae05ada341959] <==
	I0229 02:34:48.194524       1 serving.go:348] Generated self-signed cert in-memory
	I0229 02:34:50.563635       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0229 02:34:50.563722       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:34:50.568571       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 02:34:50.570480       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0229 02:34:50.570547       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0229 02:34:50.570592       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 02:34:50.572344       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 02:34:50.576057       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:34:50.576104       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0229 02:34:50.576112       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0229 02:34:50.671578       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0229 02:34:50.676849       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0229 02:34:50.676855       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:35:13.170088       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0229 02:35:13.170231       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0229 02:35:13.170488       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 29 02:35:42 functional-552840 kubelet[4739]: I0229 02:35:42.263786    4739 status_manager.go:853] "Failed to get status for pod" podUID="1d60fae10f3755c346fdf56ffdeab2a7" pod="kube-system/kube-apiserver-functional-552840" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-552840\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: E0229 02:35:42.263877    4739 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-552840\": dial tcp 192.168.49.2:8441: connect: connection refused" pod="kube-system/kube-apiserver-functional-552840"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: I0229 02:35:42.263973    4739 scope.go:117] "RemoveContainer" containerID="7439b5d3d896b4ad304abac1a1ff53a83243a7f3deba661c1d2132a286f1803d"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: I0229 02:35:42.277199    4739 status_manager.go:853] "Failed to get status for pod" podUID="1d60fae10f3755c346fdf56ffdeab2a7" pod="kube-system/kube-apiserver-functional-552840" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-552840\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: I0229 02:35:42.294198    4739 status_manager.go:853] "Failed to get status for pod" podUID="98b2078a-3db1-42f2-86d8-4d5502ef246a" pod="kube-system/kube-proxy-2d98k" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-2d98k\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: I0229 02:35:42.294479    4739 status_manager.go:853] "Failed to get status for pod" podUID="1d60fae10f3755c346fdf56ffdeab2a7" pod="kube-system/kube-apiserver-functional-552840" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-552840\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: I0229 02:35:42.294703    4739 status_manager.go:853] "Failed to get status for pod" podUID="c483ab17-00b8-4481-8ee4-310705be977b" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: I0229 02:35:42.294913    4739 status_manager.go:853] "Failed to get status for pod" podUID="98b2078a-3db1-42f2-86d8-4d5502ef246a" pod="kube-system/kube-proxy-2d98k" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-2d98k\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: I0229 02:35:42.301373    4739 status_manager.go:853] "Failed to get status for pod" podUID="72273c924315cc61d2ceae7fdf2436ce" pod="kube-system/kube-scheduler-functional-552840" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-552840\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: E0229 02:35:42.311281    4739 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-552840.17b834d3683d967a", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"598", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-552840", UID:"1d60fae10f3755c346fdf56ffdeab2a7", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Pulled", Message:"Container image \"registry.k8s.io/kube-apiserver:v1.28.4\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"functional-552840"}, FirstTimestamp:time.Date(2024, time.February, 29, 2, 35, 41, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 29, 2, 35, 42, 308846349, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-552840"}': 'Patch "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-552840.17b834d3683d967a": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	Feb 29 02:35:43 functional-552840 kubelet[4739]: I0229 02:35:43.294653    4739 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-552840" podUID="96ececed-4269-4263-b61c-a34badde7f99"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: I0229 02:35:44.297847    4739 scope.go:117] "RemoveContainer" containerID="a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: I0229 02:35:44.324480    4739 scope.go:117] "RemoveContainer" containerID="c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: I0229 02:35:44.350187    4739 scope.go:117] "RemoveContainer" containerID="a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: E0229 02:35:44.350653    4739 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee\": container with ID starting with a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee not found: ID does not exist" containerID="a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: I0229 02:35:44.350716    4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee"} err="failed to get container status \"a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee\": rpc error: code = NotFound desc = could not find container \"a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee\": container with ID starting with a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee not found: ID does not exist"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: I0229 02:35:44.350729    4739 scope.go:117] "RemoveContainer" containerID="c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: E0229 02:35:44.351202    4739 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5\": container with ID starting with c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5 not found: ID does not exist" containerID="c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: I0229 02:35:44.351236    4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5"} err="failed to get container status \"c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5\": rpc error: code = NotFound desc = could not find container \"c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5\": container with ID starting with c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5 not found: ID does not exist"
	Feb 29 02:35:45 functional-552840 kubelet[4739]: E0229 02:35:45.226476    4739 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 29 02:35:45 functional-552840 kubelet[4739]: E0229 02:35:45.232365    4739 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 29 02:35:45 functional-552840 kubelet[4739]: E0229 02:35:45.232693    4739 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 29 02:35:45 functional-552840 kubelet[4739]: I0229 02:35:45.415326    4739 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-552840"
	Feb 29 02:35:46 functional-552840 kubelet[4739]: I0229 02:35:46.117631    4739 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="94b23b4fa78b4e3ee66351d974ffe0be" path="/var/lib/kubelet/pods/94b23b4fa78b4e3ee66351d974ffe0be/volumes"
	Feb 29 02:35:46 functional-552840 kubelet[4739]: I0229 02:35:46.306461    4739 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-552840" podUID="96ececed-4269-4263-b61c-a34badde7f99"
	
	
	==> storage-provisioner [cac1e01798cbb742c5bbeadcda9fd910bcf270eb81e50eb6772e434e5f489fea] <==
	I0229 02:35:41.549654       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 02:35:41.575491       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 02:35:41.575649       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0229 02:35:45.039976       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [d1a6a3ea05682fa3b61b5ee196feadd6daaafc81fe0b16960f90224cf4c6398e] <==
	I0229 02:35:28.051920       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0229 02:35:28.053364       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:35:46.852229 1175093 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18063-1148303/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-552840 -n functional-552840
helpers_test.go:261: (dbg) Run:  kubectl --context functional-552840 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ExtraConfig FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ExtraConfig (37.03s)
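The captured restart logs point to the proximate cause: the replacement kube-apiserver exited immediately with "failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use", and the kubelet kept getting connection refused on 192.168.49.2:8441 until the new instance came up. As a hedged follow-up sketch (these commands are not part of the recorded test run and assume the functional-552840 node is still running, with ss and crictl available in the kicbase image), the port owner and any lingering apiserver container could be checked from the host:

	# Hypothetical diagnosis inside the minikube node: who holds 8441, and which apiserver containers exist in CRI-O
	out/minikube-linux-arm64 -p functional-552840 ssh "sudo ss -tlnp | grep 8441"
	out/minikube-linux-arm64 -p functional-552840 ssh "sudo crictl ps -a --name kube-apiserver"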

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (2.71s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-552840 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:829: etcd is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2024-02-29 02:35:40 +0000 UTC ContainerStatuses:[{Name:etcd State:{Waiting:<nil> Running:0x40020036e0 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0x4000170070} Ready:false RestartCount:2 Image:registry.k8s.io/etcd:3.5.9-0 ImageID:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3 ContainerID:cri-o://180892253ae851029a69c43c6e89bf137f324ca1ae4cdd0d874e5738f7d0d9ec}]}
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2024-02-29 02:35:40 +0000 UTC ContainerStatuses:[{Name:kube-apiserver State:{Waiting:<nil> Running:0x4002003740 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0x4000170150} Ready:false RestartCount:1 Image:registry.k8s.io/kube-apiserver:v1.28.4 ImageID:registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb ContainerID:cri-o://2dd23e64b5a04f491496e32c65545edfaa3cff14278ad1d25a28f5752d73e777}]}
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:829: kube-controller-manager is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:True} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2024-02-29 02:35:40 +0000 UTC ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:<nil> Running:0x40020037b8 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0x40001701c0} Ready:true RestartCount:2 Image:registry.k8s.io/kube-controller-manager:v1.28.4 ImageID:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c ContainerID:cri-o://97b4c8e2965da7e5e8f21bc6bd3c5dc250205ac07f5ab0ce697b91990e0a158b}]}
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:829: kube-scheduler is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:True} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2024-02-29 02:35:40 +0000 UTC ContainerStatuses:[{Name:kube-scheduler State:{Waiting:<nil> Running:0x4002003830 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0x4000170230} Ready:true RestartCount:2 Image:registry.k8s.io/kube-scheduler:v1.28.4 ImageID:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba ContainerID:cri-o://895d8bbb2ae15cad2c4f4d1421f99d07fd15bd857293b3599a8fc09406054a89}]}
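All four checks fail because the control-plane pods still carry a pod-level Ready=False condition right after the restarts; for etcd and kube-apiserver the containers themselves are also not ready (RestartCount 2 and 1), while the controller-manager and scheduler containers are ready but their pods have not settled yet. As a hedged sketch (not part of the recorded run), the same per-pod Ready condition the test inspects can be read back with a jsonpath query against the test context:

	kubectl --context functional-552840 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'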
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-552840
helpers_test.go:235: (dbg) docker inspect functional-552840:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6252194fa0aef66495077c56ff554e6030c203d87e998c3b855436df8b6fa6c0",
	        "Created": "2024-02-29T02:33:53.692598427Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1168944,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T02:33:53.989789425Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/6252194fa0aef66495077c56ff554e6030c203d87e998c3b855436df8b6fa6c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6252194fa0aef66495077c56ff554e6030c203d87e998c3b855436df8b6fa6c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/6252194fa0aef66495077c56ff554e6030c203d87e998c3b855436df8b6fa6c0/hosts",
	        "LogPath": "/var/lib/docker/containers/6252194fa0aef66495077c56ff554e6030c203d87e998c3b855436df8b6fa6c0/6252194fa0aef66495077c56ff554e6030c203d87e998c3b855436df8b6fa6c0-json.log",
	        "Name": "/functional-552840",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-552840:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-552840",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b5489b180f6cd978898b3c7ffdd7176513c0d3ee0d98b723e31fd98683060077-init/diff:/var/lib/docker/overlay2/330c2f3296cde464d6c1a52ceb432efd04754f92c402ca5b9f20e3ccc2c40d71/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5489b180f6cd978898b3c7ffdd7176513c0d3ee0d98b723e31fd98683060077/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5489b180f6cd978898b3c7ffdd7176513c0d3ee0d98b723e31fd98683060077/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5489b180f6cd978898b3c7ffdd7176513c0d3ee0d98b723e31fd98683060077/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-552840",
	                "Source": "/var/lib/docker/volumes/functional-552840/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-552840",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-552840",
	                "name.minikube.sigs.k8s.io": "functional-552840",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8ce7384639ac9d3f169505a39ab67f19b892ab49db9932751bcadcf8d025f01",
	            "SandboxKey": "/var/run/docker/netns/c8ce7384639a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34047"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34046"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34043"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34045"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34044"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-552840": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6252194fa0ae",
	                        "functional-552840"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "e1458d5112b07d3cb1e4488082255815fa76faf508cf834ceeb32665060d1c6e",
	                    "EndpointID": "efbbb90e794c873b197b260185adc13c73beea782b7c2df1896f9a3327ae0636",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "functional-552840",
	                        "6252194fa0ae"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
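For reference, the host port mappings recorded in the inspect output above (for example 34047 for 22/tcp) can be read back with the same Go-template filter the test harness itself uses later in these logs; a minimal sketch, assuming the functional-552840 container still exists:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-552840
	# expected to print 34047 while the container above is running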
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-552840 -n functional-552840
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-552840 logs -n 25: (1.77653947s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-870964 --log_dir                                                  | nospam-870964     | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:33 UTC |
	|         | /tmp/nospam-870964 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-870964 --log_dir                                                  | nospam-870964     | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:33 UTC |
	|         | /tmp/nospam-870964 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-870964 --log_dir                                                  | nospam-870964     | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:33 UTC |
	|         | /tmp/nospam-870964 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-870964 --log_dir                                                  | nospam-870964     | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:33 UTC |
	|         | /tmp/nospam-870964 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-870964 --log_dir                                                  | nospam-870964     | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:33 UTC |
	|         | /tmp/nospam-870964 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-870964 --log_dir                                                  | nospam-870964     | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:33 UTC |
	|         | /tmp/nospam-870964 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-870964                                                         | nospam-870964     | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:33 UTC |
	| start   | -p functional-552840                                                     | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:33 UTC | 29 Feb 24 02:34 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                   |         |         |                     |                     |
	|         | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| start   | -p functional-552840                                                     | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:34 UTC | 29 Feb 24 02:35 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-552840 cache add                                              | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-552840 cache add                                              | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-552840 cache add                                              | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-552840 cache add                                              | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | minikube-local-cache-test:functional-552840                              |                   |         |         |                     |                     |
	| cache   | functional-552840 cache delete                                           | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | minikube-local-cache-test:functional-552840                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	| ssh     | functional-552840 ssh sudo                                               | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-552840                                                        | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-552840 ssh                                                    | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-552840 cache reload                                           | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	| ssh     | functional-552840 ssh                                                    | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-552840 kubectl --                                             | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC | 29 Feb 24 02:35 UTC |
	|         | --context functional-552840                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-552840                                                     | functional-552840 | jenkins | v1.32.0 | 29 Feb 24 02:35 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:35:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:35:11.971291 1173243 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:35:11.971454 1173243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:35:11.971459 1173243 out.go:304] Setting ErrFile to fd 2...
	I0229 02:35:11.971463 1173243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:35:11.971707 1173243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
	I0229 02:35:11.972084 1173243 out.go:298] Setting JSON to false
	I0229 02:35:11.973047 1173243 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22658,"bootTime":1709151454,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0229 02:35:11.973108 1173243 start.go:139] virtualization:  
	I0229 02:35:11.975682 1173243 out.go:177] * [functional-552840] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0229 02:35:11.977478 1173243 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:35:11.979316 1173243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:35:11.977535 1173243 notify.go:220] Checking for updates...
	I0229 02:35:11.982973 1173243 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	I0229 02:35:11.984819 1173243 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	I0229 02:35:11.986595 1173243 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0229 02:35:11.988324 1173243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:35:11.990724 1173243 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:35:11.990818 1173243 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:35:12.018681 1173243 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0229 02:35:12.018825 1173243 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:35:12.093068 1173243 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:69 SystemTime:2024-02-29 02:35:12.082964342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:35:12.093172 1173243 docker.go:295] overlay module found
	I0229 02:35:12.097493 1173243 out.go:177] * Using the docker driver based on existing profile
	I0229 02:35:12.099751 1173243 start.go:299] selected driver: docker
	I0229 02:35:12.099760 1173243 start.go:903] validating driver "docker" against &{Name:functional-552840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-552840 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:35:12.099863 1173243 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:35:12.099964 1173243 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:35:12.167418 1173243 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:69 SystemTime:2024-02-29 02:35:12.157418121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:35:12.167797 1173243 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:35:12.167857 1173243 cni.go:84] Creating CNI manager for ""
	I0229 02:35:12.167865 1173243 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0229 02:35:12.167871 1173243 start_flags.go:323] config:
	{Name:functional-552840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-552840 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:35:12.170279 1173243 out.go:177] * Starting control plane node functional-552840 in cluster functional-552840
	I0229 02:35:12.171891 1173243 cache.go:121] Beginning downloading kic base image for docker with crio
	I0229 02:35:12.173653 1173243 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 02:35:12.175335 1173243 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:35:12.175390 1173243 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0229 02:35:12.175416 1173243 cache.go:56] Caching tarball of preloaded images
	I0229 02:35:12.175432 1173243 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 02:35:12.175505 1173243 preload.go:174] Found /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0229 02:35:12.175514 1173243 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 02:35:12.175624 1173243 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/config.json ...
	I0229 02:35:12.193744 1173243 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 02:35:12.193759 1173243 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 02:35:12.193781 1173243 cache.go:194] Successfully downloaded all kic artifacts
	I0229 02:35:12.193810 1173243 start.go:365] acquiring machines lock for functional-552840: {Name:mk91e80c5c5c9e73e405b54d958824b37b1938d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:35:12.193904 1173243 start.go:369] acquired machines lock for "functional-552840" in 67.224µs
	I0229 02:35:12.193929 1173243 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:35:12.193934 1173243 fix.go:54] fixHost starting: 
	I0229 02:35:12.194250 1173243 cli_runner.go:164] Run: docker container inspect functional-552840 --format={{.State.Status}}
	I0229 02:35:12.216160 1173243 fix.go:102] recreateIfNeeded on functional-552840: state=Running err=<nil>
	W0229 02:35:12.216179 1173243 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:35:12.218160 1173243 out.go:177] * Updating the running docker "functional-552840" container ...
	I0229 02:35:12.219892 1173243 machine.go:88] provisioning docker machine ...
	I0229 02:35:12.219913 1173243 ubuntu.go:169] provisioning hostname "functional-552840"
	I0229 02:35:12.220022 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:12.236489 1173243 main.go:141] libmachine: Using SSH client type: native
	I0229 02:35:12.236749 1173243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 34047 <nil> <nil>}
	I0229 02:35:12.236758 1173243 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-552840 && echo "functional-552840" | sudo tee /etc/hostname
	I0229 02:35:12.380107 1173243 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-552840
	
	I0229 02:35:12.380179 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:12.397114 1173243 main.go:141] libmachine: Using SSH client type: native
	I0229 02:35:12.397386 1173243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 34047 <nil> <nil>}
	I0229 02:35:12.397402 1173243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-552840' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-552840/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-552840' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:35:12.528044 1173243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:35:12.528061 1173243 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18063-1148303/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-1148303/.minikube}
	I0229 02:35:12.528081 1173243 ubuntu.go:177] setting up certificates
	I0229 02:35:12.528090 1173243 provision.go:83] configureAuth start
	I0229 02:35:12.528151 1173243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-552840
	I0229 02:35:12.546153 1173243 provision.go:138] copyHostCerts
	I0229 02:35:12.546207 1173243 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.pem, removing ...
	I0229 02:35:12.546216 1173243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.pem
	I0229 02:35:12.546288 1173243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.pem (1082 bytes)
	I0229 02:35:12.546398 1173243 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-1148303/.minikube/cert.pem, removing ...
	I0229 02:35:12.546402 1173243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-1148303/.minikube/cert.pem
	I0229 02:35:12.546428 1173243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-1148303/.minikube/cert.pem (1123 bytes)
	I0229 02:35:12.546478 1173243 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-1148303/.minikube/key.pem, removing ...
	I0229 02:35:12.546481 1173243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-1148303/.minikube/key.pem
	I0229 02:35:12.546503 1173243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-1148303/.minikube/key.pem (1675 bytes)
	I0229 02:35:12.546544 1173243 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca-key.pem org=jenkins.functional-552840 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-552840]
	I0229 02:35:12.808371 1173243 provision.go:172] copyRemoteCerts
	I0229 02:35:12.808431 1173243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:35:12.808470 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:12.824519 1173243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/functional-552840/id_rsa Username:docker}
	I0229 02:35:12.925146 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:35:12.950241 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:35:12.975391 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:35:13.000734 1173243 provision.go:86] duration metric: configureAuth took 472.629442ms
	I0229 02:35:13.000755 1173243 ubuntu.go:193] setting minikube options for container-runtime
	I0229 02:35:13.000990 1173243 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:35:13.001124 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:13.017928 1173243 main.go:141] libmachine: Using SSH client type: native
	I0229 02:35:13.018150 1173243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 34047 <nil> <nil>}
	I0229 02:35:13.018161 1173243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:35:18.403955 1173243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:35:18.403967 1173243 machine.go:91] provisioned docker machine in 6.184066095s
	I0229 02:35:18.403978 1173243 start.go:300] post-start starting for "functional-552840" (driver="docker")
	I0229 02:35:18.404015 1173243 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:35:18.404089 1173243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:35:18.404129 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:18.421798 1173243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/functional-552840/id_rsa Username:docker}
	I0229 02:35:18.516918 1173243 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:35:18.520314 1173243 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0229 02:35:18.520339 1173243 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0229 02:35:18.520351 1173243 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0229 02:35:18.520357 1173243 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0229 02:35:18.520366 1173243 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-1148303/.minikube/addons for local assets ...
	I0229 02:35:18.520422 1173243 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-1148303/.minikube/files for local assets ...
	I0229 02:35:18.520500 1173243 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/11536582.pem -> 11536582.pem in /etc/ssl/certs
	I0229 02:35:18.520580 1173243 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/test/nested/copy/1153658/hosts -> hosts in /etc/test/nested/copy/1153658
	I0229 02:35:18.520626 1173243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1153658
	I0229 02:35:18.529483 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/11536582.pem --> /etc/ssl/certs/11536582.pem (1708 bytes)
	I0229 02:35:18.554044 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/test/nested/copy/1153658/hosts --> /etc/test/nested/copy/1153658/hosts (40 bytes)
	I0229 02:35:18.578768 1173243 start.go:303] post-start completed in 174.775589ms
	I0229 02:35:18.578843 1173243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 02:35:18.578903 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:18.604215 1173243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/functional-552840/id_rsa Username:docker}
	I0229 02:35:18.692925 1173243 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 02:35:18.697972 1173243 fix.go:56] fixHost completed within 6.504030621s
	I0229 02:35:18.697989 1173243 start.go:83] releasing machines lock for "functional-552840", held for 6.504077595s
	I0229 02:35:18.698056 1173243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-552840
	I0229 02:35:18.715122 1173243 ssh_runner.go:195] Run: cat /version.json
	I0229 02:35:18.715149 1173243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:35:18.715164 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:18.715199 1173243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
	I0229 02:35:18.739527 1173243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/functional-552840/id_rsa Username:docker}
	I0229 02:35:18.740050 1173243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/functional-552840/id_rsa Username:docker}
	I0229 02:35:18.942187 1173243 ssh_runner.go:195] Run: systemctl --version
	I0229 02:35:18.946623 1173243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:35:19.088204 1173243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 02:35:19.092518 1173243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:35:19.101459 1173243 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0229 02:35:19.101533 1173243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:35:19.110620 1173243 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0229 02:35:19.110634 1173243 start.go:475] detecting cgroup driver to use...
	I0229 02:35:19.110667 1173243 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 02:35:19.110713 1173243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:35:19.123571 1173243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:35:19.136237 1173243 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:35:19.136290 1173243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:35:19.149879 1173243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:35:19.161803 1173243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:35:19.293314 1173243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:35:19.430941 1173243 docker.go:233] disabling docker service ...
	I0229 02:35:19.430998 1173243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:35:19.444535 1173243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:35:19.456228 1173243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:35:19.578188 1173243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:35:19.697974 1173243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:35:19.708954 1173243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:35:19.726360 1173243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:35:19.726427 1173243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:35:19.735945 1173243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:35:19.736095 1173243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:35:19.746121 1173243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:35:19.755691 1173243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:35:19.765403 1173243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:35:19.774535 1173243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:35:19.782952 1173243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:35:19.791460 1173243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:35:19.910281 1173243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:35:26.549103 1173243 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.638797636s)
	I0229 02:35:26.549119 1173243 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:35:26.549178 1173243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:35:26.553348 1173243 start.go:543] Will wait 60s for crictl version
	I0229 02:35:26.553401 1173243 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.556918 1173243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:35:26.591422 1173243 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0229 02:35:26.591498 1173243 ssh_runner.go:195] Run: crio --version
	I0229 02:35:26.629016 1173243 ssh_runner.go:195] Run: crio --version
	I0229 02:35:26.668831 1173243 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0229 02:35:26.670709 1173243 cli_runner.go:164] Run: docker network inspect functional-552840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 02:35:26.686368 1173243 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0229 02:35:26.692104 1173243 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0229 02:35:26.693901 1173243 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:35:26.693980 1173243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:35:26.737092 1173243 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:35:26.737104 1173243 crio.go:415] Images already preloaded, skipping extraction
	I0229 02:35:26.737154 1173243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:35:26.772751 1173243 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:35:26.772763 1173243 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:35:26.772845 1173243 ssh_runner.go:195] Run: crio config
	I0229 02:35:26.843031 1173243 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0229 02:35:26.843064 1173243 cni.go:84] Creating CNI manager for ""
	I0229 02:35:26.843073 1173243 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0229 02:35:26.843084 1173243 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:35:26.843101 1173243 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-552840 NodeName:functional-552840 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:35:26.843233 1173243 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-552840"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:35:26.843298 1173243 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=functional-552840 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-552840 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0229 02:35:26.843369 1173243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:35:26.852196 1173243 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:35:26.852270 1173243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:35:26.860842 1173243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (427 bytes)
	I0229 02:35:26.878759 1173243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:35:26.897773 1173243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1948 bytes)
	I0229 02:35:26.916285 1173243 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0229 02:35:26.919836 1173243 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840 for IP: 192.168.49.2
	I0229 02:35:26.919863 1173243 certs.go:190] acquiring lock for shared ca certs: {Name:mk629bf08f2bf9bf9dfe188d027237a0e3bc8e94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:35:26.920107 1173243 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.key
	I0229 02:35:26.920167 1173243 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.key
	I0229 02:35:26.920247 1173243 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.key
	I0229 02:35:26.920300 1173243 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/apiserver.key.dd3b5fb2
	I0229 02:35:26.920341 1173243 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/proxy-client.key
	I0229 02:35:26.920444 1173243 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/1153658.pem (1338 bytes)
	W0229 02:35:26.920474 1173243 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/1153658_empty.pem, impossibly tiny 0 bytes
	I0229 02:35:26.920482 1173243 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 02:35:26.920507 1173243 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:35:26.920530 1173243 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:35:26.920550 1173243 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/key.pem (1675 bytes)
	I0229 02:35:26.920591 1173243 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/11536582.pem (1708 bytes)
	I0229 02:35:26.921207 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:35:26.944766 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:35:26.968706 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:35:26.992831 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:35:27.020827 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:35:27.047057 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:35:27.072629 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:35:27.097900 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 02:35:27.122834 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:35:27.147581 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/1153658.pem --> /usr/share/ca-certificates/1153658.pem (1338 bytes)
	I0229 02:35:27.171830 1173243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/11536582.pem --> /usr/share/ca-certificates/11536582.pem (1708 bytes)
	I0229 02:35:27.195856 1173243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:35:27.213782 1173243 ssh_runner.go:195] Run: openssl version
	I0229 02:35:27.219210 1173243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:35:27.228730 1173243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:35:27.232257 1173243 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:35:27.232320 1173243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:35:27.239211 1173243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:35:27.248293 1173243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1153658.pem && ln -fs /usr/share/ca-certificates/1153658.pem /etc/ssl/certs/1153658.pem"
	I0229 02:35:27.257753 1173243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1153658.pem
	I0229 02:35:27.261325 1173243 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 02:33 /usr/share/ca-certificates/1153658.pem
	I0229 02:35:27.261388 1173243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1153658.pem
	I0229 02:35:27.268447 1173243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1153658.pem /etc/ssl/certs/51391683.0"
	I0229 02:35:27.277435 1173243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11536582.pem && ln -fs /usr/share/ca-certificates/11536582.pem /etc/ssl/certs/11536582.pem"
	I0229 02:35:27.287415 1173243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11536582.pem
	I0229 02:35:27.291055 1173243 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 02:33 /usr/share/ca-certificates/11536582.pem
	I0229 02:35:27.291109 1173243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11536582.pem
	I0229 02:35:27.298287 1173243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11536582.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:35:27.307625 1173243 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:35:27.311007 1173243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:35:27.317815 1173243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:35:27.324964 1173243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:35:27.332033 1173243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:35:27.339072 1173243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:35:27.346158 1173243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 02:35:27.353169 1173243 kubeadm.go:404] StartCluster: {Name:functional-552840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-552840 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:35:27.353265 1173243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:35:27.353330 1173243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:35:27.391533 1173243 cri.go:89] found id: "c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5"
	I0229 02:35:27.391544 1173243 cri.go:89] found id: "bdeb88db5e381df3607fe01c5c069c2341cdce05126e9120f7e5563b31f24ba6"
	I0229 02:35:27.391549 1173243 cri.go:89] found id: "9ca0a2bfa68742f14f269004a84e16b497170712db0a2560a68ae05ada341959"
	I0229 02:35:27.391552 1173243 cri.go:89] found id: "a5d58ef036431a8b1372c65a6be540c3a5c6454bd9c26ac1100bed8d4fda8481"
	I0229 02:35:27.391554 1173243 cri.go:89] found id: "19ce3862c0c2026f19e1fb2a8e2c3bb1d28fa9f100f0fa1292b1587f5308ac6c"
	I0229 02:35:27.391557 1173243 cri.go:89] found id: "efe1e980fdbf9c294b396e3121d52c5f640f67c72539dd94e3f35d8ec10317a8"
	I0229 02:35:27.391560 1173243 cri.go:89] found id: "d8a16037227628d5fb5de51a2dfcc744c42a2954b082ff1b61a2a303be38597a"
	I0229 02:35:27.391562 1173243 cri.go:89] found id: "6e92f8b2eba8cfac95cf4102c72f2fbc71182ca6dfbfe66a6b7a531b14771de3"
	I0229 02:35:27.391564 1173243 cri.go:89] found id: "bc4ae96187989d355202ac95aa2f59901d815be233807c61e9a68a3fb0f1c27f"
	I0229 02:35:27.391569 1173243 cri.go:89] found id: "cbf436f6e5267f33cffcf67cea8b2246e948a7341701898c4ee42e5a985b4b3c"
	I0229 02:35:27.391572 1173243 cri.go:89] found id: "2b0274a27cd4188b99a7ce81f7a4b08de4e54e2c9b28f7971dfc8d3ade720d51"
	I0229 02:35:27.391574 1173243 cri.go:89] found id: "046fa64f2f4673fdaebc883384d0a23d01e7c6a196fbe99f6c8dce5ef3d62cda"
	I0229 02:35:27.391576 1173243 cri.go:89] found id: "3bcf1eea258fb53a0de8f2f11c842d89079b1afb774231094db080223c3baa60"
	I0229 02:35:27.391578 1173243 cri.go:89] found id: "6d678d55db42298c10ff914ce710ed697ee6e1dc73edb5d71b9e723a84dfda7f"
	I0229 02:35:27.391584 1173243 cri.go:89] found id: "ce4aaddcb5601ac2818903bb5d2d76c9985baad7620afc42c1a81568f015e073"
	I0229 02:35:27.391586 1173243 cri.go:89] found id: "624fc2a129334e508d29d4775c90e3d489a1da1f8bde5a53b1eea6e3114b752b"
	I0229 02:35:27.391588 1173243 cri.go:89] found id: ""
	I0229 02:35:27.391635 1173243 ssh_runner.go:195] Run: sudo runc list -f json
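The cert provisioning logged above follows one pattern per CA: copy it into /usr/share/ca-certificates, compute its OpenSSL subject hash, symlink it into /etc/ssl/certs under that hash, then verify the control-plane certificates are not about to expire (-checkend 86400 fails if a cert expires within the next 24 hours). A minimal manual sketch of the same steps, assuming shell access to the node and the paths shown in the log:

	# recreate the subject-hash symlink the same way certs.go does above
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	# exits non-zero if the cert expires within 86400 seconds (24h)
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400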
	
	
	==> CRI-O <==
	Feb 29 02:35:41 functional-552840 crio[3876]: time="2024-02-29 02:35:41.491817111Z" level=info msg="Starting container: cac1e01798cbb742c5bbeadcda9fd910bcf270eb81e50eb6772e434e5f489fea" id=1630ab51-5427-457f-964b-5f2df0ce31d5 name=/runtime.v1.RuntimeService/StartContainer
	Feb 29 02:35:41 functional-552840 crio[3876]: time="2024-02-29 02:35:41.504246170Z" level=info msg="Started container" PID=4872 containerID=cac1e01798cbb742c5bbeadcda9fd910bcf270eb81e50eb6772e434e5f489fea description=kube-system/storage-provisioner/storage-provisioner id=1630ab51-5427-457f-964b-5f2df0ce31d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cb8464efcbc81f5127d7278965f37404be327eddc0994855f1c1f93ee779ab29
	Feb 29 02:35:41 functional-552840 conmon[4805]: conmon 7439b5d3d896b4ad304a <ninfo>: container 4816 exited with status 1
	Feb 29 02:35:41 functional-552840 crio[3876]: time="2024-02-29 02:35:41.552535903Z" level=info msg="Created container 1f68da0225b139c0ce0511e6caa30f9c805eed8e92911f0fccf312063e176217: kube-system/kube-proxy-2d98k/kube-proxy" id=6959c839-d39a-4b14-9a40-300c7f2b89f9 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 29 02:35:41 functional-552840 crio[3876]: time="2024-02-29 02:35:41.553223760Z" level=info msg="Starting container: 1f68da0225b139c0ce0511e6caa30f9c805eed8e92911f0fccf312063e176217" id=5f273844-4623-4ed5-af2f-e1fbc1660f9a name=/runtime.v1.RuntimeService/StartContainer
	Feb 29 02:35:41 functional-552840 crio[3876]: time="2024-02-29 02:35:41.569549248Z" level=info msg="Started container" PID=4864 containerID=1f68da0225b139c0ce0511e6caa30f9c805eed8e92911f0fccf312063e176217 description=kube-system/kube-proxy-2d98k/kube-proxy id=5f273844-4623-4ed5-af2f-e1fbc1660f9a name=/runtime.v1.RuntimeService/StartContainer sandboxID=6e2078382a88a6e616609dc135a82e70f6e7d2ae553d9dfb24b6363f9fc82a6a
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.118795445Z" level=info msg="Stopping container: a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee (timeout: 2s)" id=3ac0a2d7-59ac-46ba-8236-651dd559013e name=/runtime.v1.RuntimeService/StopContainer
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.302606171Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.28.4" id=fac563b0-261a-47e1-8b24-6c08e7ec632d name=/runtime.v1.ImageService/ImageStatus
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.302870620Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419,RepoTags:[registry.k8s.io/kube-apiserver:v1.28.4],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2],Size_:121119694,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=fac563b0-261a-47e1-8b24-6c08e7ec632d name=/runtime.v1.ImageService/ImageStatus
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.310490221Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.28.4" id=c7be3898-ce8a-42a2-97bf-c0be13ce550c name=/runtime.v1.ImageService/ImageStatus
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.310865587Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419,RepoTags:[registry.k8s.io/kube-apiserver:v1.28.4],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2],Size_:121119694,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=c7be3898-ce8a-42a2-97bf-c0be13ce550c name=/runtime.v1.ImageService/ImageStatus
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.315976467Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-552840/kube-apiserver" id=30d2058e-de95-4cba-bb4c-754c7c820f80 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.316270349Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.468037692Z" level=info msg="Created container 2dd23e64b5a04f491496e32c65545edfaa3cff14278ad1d25a28f5752d73e777: kube-system/kube-apiserver-functional-552840/kube-apiserver" id=30d2058e-de95-4cba-bb4c-754c7c820f80 name=/runtime.v1.RuntimeService/CreateContainer
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.468576315Z" level=info msg="Starting container: 2dd23e64b5a04f491496e32c65545edfaa3cff14278ad1d25a28f5752d73e777" id=52f187f0-938b-446f-bd63-08e9b2d44fea name=/runtime.v1.RuntimeService/StartContainer
	Feb 29 02:35:42 functional-552840 crio[3876]: time="2024-02-29 02:35:42.477791090Z" level=info msg="Started container" PID=5065 containerID=2dd23e64b5a04f491496e32c65545edfaa3cff14278ad1d25a28f5752d73e777 description=kube-system/kube-apiserver-functional-552840/kube-apiserver id=52f187f0-938b-446f-bd63-08e9b2d44fea name=/runtime.v1.RuntimeService/StartContainer sandboxID=a41c0e7761bd985efcc079309592a15c0157fe88c758e8b11ff5231a6087a38c
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.130070267Z" level=warning msg="Stopping container a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=3ac0a2d7-59ac-46ba-8236-651dd559013e name=/runtime.v1.RuntimeService/StopContainer
	Feb 29 02:35:44 functional-552840 conmon[4199]: conmon a0afe963a15ff316218c <ninfo>: container 4243 exited with status 137
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.290676567Z" level=info msg="Stopped container a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee: kube-system/kube-apiserver-functional-552840/kube-apiserver" id=3ac0a2d7-59ac-46ba-8236-651dd559013e name=/runtime.v1.RuntimeService/StopContainer
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.291324498Z" level=info msg="Stopping pod sandbox: 4c67c1acabc26b1f0deee3c85cb80278e07cc994b48aec974d02e1d1d2606d1f" id=fa600102-3ad1-46a2-8a37-fa9868b66d81 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.292396869Z" level=info msg="Stopped pod sandbox: 4c67c1acabc26b1f0deee3c85cb80278e07cc994b48aec974d02e1d1d2606d1f" id=fa600102-3ad1-46a2-8a37-fa9868b66d81 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.300312074Z" level=info msg="Removing container: a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee" id=2651d776-3552-46e9-840f-65b678cc73ae name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.323906117Z" level=info msg="Removed container a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee: kube-system/kube-apiserver-functional-552840/kube-apiserver" id=2651d776-3552-46e9-840f-65b678cc73ae name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.325506543Z" level=info msg="Removing container: c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5" id=6cc95ad7-84f8-4ffa-84ed-6ac466316a3f name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 29 02:35:44 functional-552840 crio[3876]: time="2024-02-29 02:35:44.349813033Z" level=info msg="Removed container c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5: kube-system/kube-apiserver-functional-552840/kube-apiserver" id=6cc95ad7-84f8-4ffa-84ed-6ac466316a3f name=/runtime.v1.RuntimeService/RemoveContainer
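The container IDs in the CRI-O log above can be mapped back to their pods from the node itself; a sketch using the same crictl filter minikube runs (the ID below is the restarted kube-apiserver from this log, and shell access to the node is assumed):

	# list kube-system container IDs, as in the ssh_runner call above
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# resolve one ID to its pod/container metadata and recent output
	sudo crictl inspect 2dd23e64b5a04f491496e32c65545edfaa3cff14278ad1d25a28f5752d73e777
	sudo crictl logs --tail 50 2dd23e64b5a04f491496e32c65545edfaa3cff14278ad1d25a28f5752d73e777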
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2dd23e64b5a04       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   7 seconds ago        Running             kube-apiserver            1                   a41c0e7761bd9       kube-apiserver-functional-552840
	1f68da0225b13       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   8 seconds ago        Running             kube-proxy                3                   6e2078382a88a       kube-proxy-2d98k
	cac1e01798cbb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   8 seconds ago        Running             storage-provisioner       3                   cb8464efcbc81       storage-provisioner
	7439b5d3d896b       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   8 seconds ago        Exited              kube-apiserver            0                   a41c0e7761bd9       kube-apiserver-functional-552840
	8865560f7ab58       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   17 seconds ago       Running             coredns                   2                   6034b00de29ce       coredns-5dd5756b68-8tbdk
	180892253ae85       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   22 seconds ago       Running             etcd                      2                   49c286fb82931       etcd-functional-552840
	f09ca2018856f       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d   22 seconds ago       Running             kindnet-cni               2                   abadc97ed2e7c       kindnet-jkrdt
	895d8bbb2ae15       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   22 seconds ago       Running             kube-scheduler            2                   c67cfd6e67653       kube-scheduler-functional-552840
	d1a6a3ea05682       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   22 seconds ago       Exited              storage-provisioner       2                   cb8464efcbc81       storage-provisioner
	a4266bd9b6f32       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   22 seconds ago       Exited              kube-proxy                2                   6e2078382a88a       kube-proxy-2d98k
	97b4c8e2965da       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   22 seconds ago       Running             kube-controller-manager   2                   c4a3625248e0d       kube-controller-manager-functional-552840
	bdeb88db5e381       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d   About a minute ago   Exited              kindnet-cni               1                   abadc97ed2e7c       kindnet-jkrdt
	9ca0a2bfa6874       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   About a minute ago   Exited              kube-scheduler            1                   c67cfd6e67653       kube-scheduler-functional-552840
	a5d58ef036431       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   About a minute ago   Exited              kube-controller-manager   1                   c4a3625248e0d       kube-controller-manager-functional-552840
	d8a1603722762       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   About a minute ago   Exited              etcd                      1                   49c286fb82931       etcd-functional-552840
	6e92f8b2eba8c       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   About a minute ago   Exited              coredns                   1                   6034b00de29ce       coredns-5dd5756b68-8tbdk
	
	
	==> coredns [6e92f8b2eba8cfac95cf4102c72f2fbc71182ca6dfbfe66a6b7a531b14771de3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37545 - 11872 "HINFO IN 7527231318132342997.4885945764046422916. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022920401s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8865560f7ab58fd49bb6fe07c9203f15ed871e8509a01e2b575efe7cb61f0af3] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36539 - 51779 "HINFO IN 2141434055078421631.5050951271688495009. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026712287s
	
	
	==> describe nodes <==
	Name:               functional-552840
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-552840
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=functional-552840
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T02_34_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:34:12 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-552840
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:35:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:35:40 +0000   Thu, 29 Feb 2024 02:34:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:35:40 +0000   Thu, 29 Feb 2024 02:34:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:35:40 +0000   Thu, 29 Feb 2024 02:34:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 29 Feb 2024 02:35:40 +0000   Thu, 29 Feb 2024 02:35:40 +0000   KubeletNotReady              container runtime status check may not have completed yet
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-552840
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 ecea58ec9aa0423bb3d994e859e645e8
	  System UUID:                3055bdb0-59c9-4fca-9eae-56c5680b16e6
	  Boot ID:                    d15cd6b5-a0a6-45f5-95b2-2521c5763941
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-8tbdk                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     82s
	  kube-system                 etcd-functional-552840                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         95s
	  kube-system                 kindnet-jkrdt                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      83s
	  kube-system                 kube-apiserver-functional-552840             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-controller-manager-functional-552840    200m (10%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-2d98k                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-functional-552840             100m (5%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 80s                  kube-proxy       
	  Normal   Starting                 8s                   kube-proxy       
	  Normal   Starting                 18s                  kube-proxy       
	  Normal   Starting                 60s                  kube-proxy       
	  Normal   Starting                 103s                 kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    102s (x8 over 103s)  kubelet          Node functional-552840 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  102s (x8 over 103s)  kubelet          Node functional-552840 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     102s (x8 over 103s)  kubelet          Node functional-552840 status is now: NodeHasSufficientPID
	  Normal   Starting                 94s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  94s                  kubelet          Node functional-552840 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    94s                  kubelet          Node functional-552840 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     94s                  kubelet          Node functional-552840 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           83s                  node-controller  Node functional-552840 event: Registered Node functional-552840 in Controller
	  Normal   NodeReady                78s                  kubelet          Node functional-552840 status is now: NodeReady
	  Normal   RegisteredNode           49s                  node-controller  Node functional-552840 event: Registered Node functional-552840 in Controller
	  Warning  ContainerGCFailed        34s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 10s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10s                  kubelet          Node functional-552840 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10s                  kubelet          Node functional-552840 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10s                  kubelet          Node functional-552840 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             10s                  kubelet          Node functional-552840 status is now: NodeNotReady
	  Normal   RegisteredNode           4s                   node-controller  Node functional-552840 event: Registered Node functional-552840 in Controller
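The node dump above was taken seconds after the kubelet restart: Ready is False, the node.kubernetes.io/not-ready:NoSchedule taint is still present, and kube-proxy shows repeated Starting events. The same snapshot can be re-taken against the profile; the context name below assumes the default kubectl context minikube creates for this profile:

	kubectl --context functional-552840 describe node functional-552840
	kubectl --context functional-552840 get node functional-552840 -o jsonpath='{.spec.taints}'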
	
	
	==> dmesg <==
	[  +0.001066] FS-Cache: O-key=[8] '3a3e5c0100000000'
	[  +0.000719] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000964] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=000000008300a153
	[  +0.001170] FS-Cache: N-key=[8] '3a3e5c0100000000'
	[  +0.002926] FS-Cache: Duplicate cookie detected
	[  +0.000709] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001140] FS-Cache: O-cookie d=000000007d8e8356{9p.inode} n=00000000aff28ed8
	[  +0.001147] FS-Cache: O-key=[8] '3a3e5c0100000000'
	[  +0.000713] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000968] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=0000000058168a0d
	[  +0.001153] FS-Cache: N-key=[8] '3a3e5c0100000000'
	[  +2.636492] FS-Cache: Duplicate cookie detected
	[  +0.000790] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001146] FS-Cache: O-cookie d=000000007d8e8356{9p.inode} n=00000000d8c7b9c4
	[  +0.001157] FS-Cache: O-key=[8] '393e5c0100000000'
	[  +0.000747] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=000000008300a153
	[  +0.001083] FS-Cache: N-key=[8] '393e5c0100000000'
	[  +0.299075] FS-Cache: Duplicate cookie detected
	[  +0.000712] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001037] FS-Cache: O-cookie d=000000007d8e8356{9p.inode} n=00000000aa687443
	[  +0.001052] FS-Cache: O-key=[8] '3f3e5c0100000000'
	[  +0.000728] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=00000000f8ac0df1
	[  +0.001102] FS-Cache: N-key=[8] '3f3e5c0100000000'
	
	
	==> etcd [180892253ae851029a69c43c6e89bf137f324ca1ae4cdd0d874e5738f7d0d9ec] <==
	{"level":"info","ts":"2024-02-29T02:35:28.21547Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T02:35:28.215523Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-02-29T02:35:28.215625Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T02:35:28.215653Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T02:35:28.215662Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T02:35:28.215845Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T02:35:28.21586Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T02:35:28.216375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-02-29T02:35:28.21644Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-02-29T02:35:28.216531Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:35:28.216562Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:35:29.864034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-29T02:35:29.864167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-29T02:35:29.864222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-02-29T02:35:29.864263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-02-29T02:35:29.864296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-02-29T02:35:29.864335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-02-29T02:35:29.864379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-02-29T02:35:29.8722Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-552840 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:35:29.87243Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:35:29.873423Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-29T02:35:29.873605Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:35:29.874553Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:35:29.876006Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:35:29.87608Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [d8a16037227628d5fb5de51a2dfcc744c42a2954b082ff1b61a2a303be38597a] <==
	{"level":"info","ts":"2024-02-29T02:34:45.790329Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T02:34:46.865381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T02:34:46.865438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T02:34:46.865459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-29T02:34:46.865473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T02:34:46.865482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-02-29T02:34:46.8655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-02-29T02:34:46.865508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-02-29T02:34:46.86839Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-552840 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:34:46.868538Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:34:46.86865Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:34:46.86966Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-29T02:34:46.870478Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:34:46.870545Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:34:46.888102Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:35:13.174428Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-29T02:35:13.174473Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-552840","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-02-29T02:35:13.174536Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T02:35:13.174728Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T02:35:13.331209Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T02:35:13.331267Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-29T02:35:13.331331Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-02-29T02:35:13.333689Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T02:35:13.333814Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T02:35:13.333861Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-552840","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 02:35:50 up  6:18,  0 users,  load average: 1.44, 1.41, 2.00
	Linux functional-552840 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [bdeb88db5e381df3607fe01c5c069c2341cdce05126e9120f7e5563b31f24ba6] <==
	I0229 02:34:45.700308       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0229 02:34:45.704089       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0229 02:34:45.706591       1 main.go:116] setting mtu 1500 for CNI 
	I0229 02:34:45.706668       1 main.go:146] kindnetd IP family: "ipv4"
	I0229 02:34:45.706712       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0229 02:34:49.584541       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:34:49.584655       1 main.go:227] handling current node
	I0229 02:34:59.598152       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:34:59.598178       1 main.go:227] handling current node
	I0229 02:35:09.609332       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:35:09.609437       1 main.go:227] handling current node
	
	
	==> kindnet [f09ca2018856f1e8ed6f18f814150d53d022797d675ca22e78bd16c440b944d6] <==
	I0229 02:35:27.937285       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0229 02:35:27.937500       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0229 02:35:27.937649       1 main.go:116] setting mtu 1500 for CNI 
	I0229 02:35:27.937666       1 main.go:146] kindnetd IP family: "ipv4"
	I0229 02:35:27.937681       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0229 02:35:28.122675       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0229 02:35:28.122924       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0229 02:35:32.076650       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:35:32.076685       1 main.go:227] handling current node
	I0229 02:35:42.090734       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:35:42.090780       1 main.go:227] handling current node
	
	
	==> kube-apiserver [2dd23e64b5a04f491496e32c65545edfaa3cff14278ad1d25a28f5752d73e777] <==
	I0229 02:35:44.998482       1 aggregator.go:164] waiting for initial CRD sync...
	I0229 02:35:44.999005       1 controller.go:116] Starting legacy_token_tracking_controller
	I0229 02:35:44.999086       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0229 02:35:44.999155       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0229 02:35:45.088435       1 available_controller.go:423] Starting AvailableConditionController
	I0229 02:35:45.088554       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0229 02:35:45.088626       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0229 02:35:45.088660       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0229 02:35:45.388874       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 02:35:45.389245       1 aggregator.go:166] initial CRD sync complete...
	I0229 02:35:45.389351       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 02:35:45.389358       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 02:35:45.399215       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 02:35:45.471556       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 02:35:45.488500       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 02:35:45.492137       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 02:35:45.492838       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0229 02:35:45.492861       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0229 02:35:45.493465       1 cache.go:39] Caches are synced for autoregister controller
	I0229 02:35:45.493599       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 02:35:45.498182       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 02:35:45.993165       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0229 02:35:46.350869       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0229 02:35:46.352231       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 02:35:46.369412       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [7439b5d3d896b4ad304abac1a1ff53a83243a7f3deba661c1d2132a286f1803d] <==
	I0229 02:35:41.521705       1 options.go:220] external host was not specified, using 192.168.49.2
	I0229 02:35:41.522782       1 server.go:148] Version: v1.28.4
	I0229 02:35:41.522815       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0229 02:35:41.523117       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
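This first kube-apiserver restart attempt (container 7439b5d3d896b, shown as Exited in the container status table) died immediately because something was still bound to 0.0.0.0:8441; per the CRI-O log above, the previous apiserver container a0afe963a15ff was only stopped and removed a couple of seconds later. A quick check for what currently holds the port, run on the node (a sketch, assuming shell access):

	# show the process listening on the configured apiserver port (8441)
	sudo ss -ltnp 'sport = :8441'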
	
	
	==> kube-controller-manager [97b4c8e2965da7e5e8f21bc6bd3c5dc250205ac07f5ab0ce697b91990e0a158b] <==
	I0229 02:35:46.395229       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-552840"
	I0229 02:35:46.395294       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0229 02:35:46.395344       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0229 02:35:46.395405       1 taint_manager.go:210] "Sending events to api server"
	I0229 02:35:46.397017       1 event.go:307] "Event occurred" object="functional-552840" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-552840 event: Registered Node functional-552840 in Controller"
	I0229 02:35:46.404659       1 shared_informer.go:318] Caches are synced for HPA
	I0229 02:35:46.409704       1 shared_informer.go:318] Caches are synced for cronjob
	I0229 02:35:46.412642       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-8tbdk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 02:35:46.418763       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-functional-552840" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 02:35:46.420257       1 event.go:307] "Event occurred" object="kube-system/kindnet-jkrdt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 02:35:46.420747       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-2d98k" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 02:35:46.427034       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-functional-552840" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 02:35:46.430864       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 02:35:46.447851       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 02:35:46.462318       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0229 02:35:46.481433       1 shared_informer.go:318] Caches are synced for disruption
	I0229 02:35:46.493067       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 02:35:46.567966       1 shared_informer.go:318] Caches are synced for attach detach
	I0229 02:35:46.760325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="347.05896ms"
	I0229 02:35:46.760511       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.092µs"
	I0229 02:35:46.937582       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 02:35:46.937686       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0229 02:35:46.973699       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 02:35:50.181800       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.054048ms"
	I0229 02:35:50.182797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.748µs"
	
	
	==> kube-controller-manager [a5d58ef036431a8b1372c65a6be540c3a5c6454bd9c26ac1100bed8d4fda8481] <==
	I0229 02:35:01.165430       1 shared_informer.go:318] Caches are synced for service account
	I0229 02:35:01.180615       1 shared_informer.go:318] Caches are synced for TTL
	I0229 02:35:01.187106       1 shared_informer.go:318] Caches are synced for stateful set
	I0229 02:35:01.189398       1 shared_informer.go:318] Caches are synced for endpoint
	I0229 02:35:01.193669       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0229 02:35:01.203484       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0229 02:35:01.211978       1 shared_informer.go:318] Caches are synced for daemon sets
	I0229 02:35:01.214156       1 shared_informer.go:318] Caches are synced for taint
	I0229 02:35:01.214359       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0229 02:35:01.214487       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-552840"
	I0229 02:35:01.214582       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0229 02:35:01.214635       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0229 02:35:01.214697       1 taint_manager.go:210] "Sending events to api server"
	I0229 02:35:01.215033       1 event.go:307] "Event occurred" object="functional-552840" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-552840 event: Registered Node functional-552840 in Controller"
	I0229 02:35:01.239617       1 shared_informer.go:318] Caches are synced for job
	I0229 02:35:01.248823       1 shared_informer.go:318] Caches are synced for attach detach
	I0229 02:35:01.269156       1 shared_informer.go:318] Caches are synced for cronjob
	I0229 02:35:01.272054       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 02:35:01.288809       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0229 02:35:01.296055       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0229 02:35:01.309331       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0229 02:35:01.336732       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 02:35:01.709394       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 02:35:01.709426       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0229 02:35:01.728819       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [1f68da0225b139c0ce0511e6caa30f9c805eed8e92911f0fccf312063e176217] <==
	I0229 02:35:41.657678       1 server_others.go:69] "Using iptables proxy"
	I0229 02:35:41.674278       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0229 02:35:41.707754       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0229 02:35:41.709606       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:35:41.709706       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0229 02:35:41.709740       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0229 02:35:41.709805       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:35:41.710240       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:35:41.710294       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:35:41.711138       1 config.go:188] "Starting service config controller"
	I0229 02:35:41.711390       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:35:41.711425       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:35:41.711430       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:35:41.711962       1 config.go:315] "Starting node config controller"
	I0229 02:35:41.711974       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:35:41.812062       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:35:41.812160       1 shared_informer.go:318] Caches are synced for service config
	I0229 02:35:41.812185       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [a4266bd9b6f329b21ea63a7d11517dd9f756ce2d5ae0176a958e5513622c2f75] <==
	I0229 02:35:29.685312       1 server_others.go:69] "Using iptables proxy"
	I0229 02:35:32.108500       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0229 02:35:32.168544       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0229 02:35:32.170955       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:35:32.171069       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0229 02:35:32.171115       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0229 02:35:32.171175       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:35:32.171411       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:35:32.171655       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:35:32.172545       1 config.go:188] "Starting service config controller"
	I0229 02:35:32.172629       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:35:32.172690       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:35:32.172729       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:35:32.173306       1 config.go:315] "Starting node config controller"
	I0229 02:35:32.173370       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:35:32.273100       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:35:32.274187       1 shared_informer.go:318] Caches are synced for node config
	I0229 02:35:32.274233       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [895d8bbb2ae15cad2c4f4d1421f99d07fd15bd857293b3599a8fc09406054a89] <==
	W0229 02:35:32.025044       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 02:35:32.025125       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 02:35:32.025185       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 02:35:32.070081       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0229 02:35:32.070211       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:35:32.075954       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 02:35:32.076228       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 02:35:32.076456       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:35:32.076271       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 02:35:32.179922       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0229 02:35:45.317696       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	E0229 02:35:45.319375       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E0229 02:35:45.324148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E0229 02:35:45.324299       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0229 02:35:45.324380       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: unknown (get pods)
	E0229 02:35:45.324445       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E0229 02:35:45.324509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: unknown (get nodes)
	E0229 02:35:45.324577       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E0229 02:35:45.324635       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0229 02:35:45.324717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0229 02:35:45.328056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0229 02:35:45.328233       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services)
	E0229 02:35:45.328347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E0229 02:35:45.328401       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E0229 02:35:45.328459       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	
	
	==> kube-scheduler [9ca0a2bfa68742f14f269004a84e16b497170712db0a2560a68ae05ada341959] <==
	I0229 02:34:48.194524       1 serving.go:348] Generated self-signed cert in-memory
	I0229 02:34:50.563635       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0229 02:34:50.563722       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:34:50.568571       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 02:34:50.570480       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0229 02:34:50.570547       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0229 02:34:50.570592       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 02:34:50.572344       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 02:34:50.576057       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:34:50.576104       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0229 02:34:50.576112       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0229 02:34:50.671578       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0229 02:34:50.676849       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0229 02:34:50.676855       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:35:13.170088       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0229 02:35:13.170231       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0229 02:35:13.170488       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 29 02:35:42 functional-552840 kubelet[4739]: E0229 02:35:42.263877    4739 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-552840\": dial tcp 192.168.49.2:8441: connect: connection refused" pod="kube-system/kube-apiserver-functional-552840"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: I0229 02:35:42.263973    4739 scope.go:117] "RemoveContainer" containerID="7439b5d3d896b4ad304abac1a1ff53a83243a7f3deba661c1d2132a286f1803d"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: I0229 02:35:42.277199    4739 status_manager.go:853] "Failed to get status for pod" podUID="1d60fae10f3755c346fdf56ffdeab2a7" pod="kube-system/kube-apiserver-functional-552840" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-552840\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: I0229 02:35:42.294198    4739 status_manager.go:853] "Failed to get status for pod" podUID="98b2078a-3db1-42f2-86d8-4d5502ef246a" pod="kube-system/kube-proxy-2d98k" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-2d98k\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: I0229 02:35:42.294479    4739 status_manager.go:853] "Failed to get status for pod" podUID="1d60fae10f3755c346fdf56ffdeab2a7" pod="kube-system/kube-apiserver-functional-552840" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-552840\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: I0229 02:35:42.294703    4739 status_manager.go:853] "Failed to get status for pod" podUID="c483ab17-00b8-4481-8ee4-310705be977b" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: I0229 02:35:42.294913    4739 status_manager.go:853] "Failed to get status for pod" podUID="98b2078a-3db1-42f2-86d8-4d5502ef246a" pod="kube-system/kube-proxy-2d98k" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-2d98k\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: I0229 02:35:42.301373    4739 status_manager.go:853] "Failed to get status for pod" podUID="72273c924315cc61d2ceae7fdf2436ce" pod="kube-system/kube-scheduler-functional-552840" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-552840\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 02:35:42 functional-552840 kubelet[4739]: E0229 02:35:42.311281    4739 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-552840.17b834d3683d967a", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"598", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-552840", UID:"1d60fae10f3755c346fdf56ffdeab2a7", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Pulled", Message:"Container image \"registry.k8s.io/kube-apiserver:v1.28.4\" already present on ma
chine", Source:v1.EventSource{Component:"kubelet", Host:"functional-552840"}, FirstTimestamp:time.Date(2024, time.February, 29, 2, 35, 41, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 29, 2, 35, 42, 308846349, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-552840"}': 'Patch "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-552840.17b834d3683d967a": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	Feb 29 02:35:43 functional-552840 kubelet[4739]: I0229 02:35:43.294653    4739 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-552840" podUID="96ececed-4269-4263-b61c-a34badde7f99"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: I0229 02:35:44.297847    4739 scope.go:117] "RemoveContainer" containerID="a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: I0229 02:35:44.324480    4739 scope.go:117] "RemoveContainer" containerID="c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: I0229 02:35:44.350187    4739 scope.go:117] "RemoveContainer" containerID="a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: E0229 02:35:44.350653    4739 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee\": container with ID starting with a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee not found: ID does not exist" containerID="a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: I0229 02:35:44.350716    4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee"} err="failed to get container status \"a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee\": rpc error: code = NotFound desc = could not find container \"a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee\": container with ID starting with a0afe963a15ff316218c20699dea4f74471116b7f30ef84999d2b86970bbe7ee not found: ID does not exist"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: I0229 02:35:44.350729    4739 scope.go:117] "RemoveContainer" containerID="c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: E0229 02:35:44.351202    4739 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5\": container with ID starting with c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5 not found: ID does not exist" containerID="c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5"
	Feb 29 02:35:44 functional-552840 kubelet[4739]: I0229 02:35:44.351236    4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5"} err="failed to get container status \"c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5\": rpc error: code = NotFound desc = could not find container \"c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5\": container with ID starting with c5cd49ab6faa6bd0d07595083b20c743c1573d793d273928a5d3a1e58dd7a0a5 not found: ID does not exist"
	Feb 29 02:35:45 functional-552840 kubelet[4739]: E0229 02:35:45.226476    4739 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 29 02:35:45 functional-552840 kubelet[4739]: E0229 02:35:45.232365    4739 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 29 02:35:45 functional-552840 kubelet[4739]: E0229 02:35:45.232693    4739 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 29 02:35:45 functional-552840 kubelet[4739]: I0229 02:35:45.415326    4739 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-552840"
	Feb 29 02:35:46 functional-552840 kubelet[4739]: I0229 02:35:46.117631    4739 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="94b23b4fa78b4e3ee66351d974ffe0be" path="/var/lib/kubelet/pods/94b23b4fa78b4e3ee66351d974ffe0be/volumes"
	Feb 29 02:35:46 functional-552840 kubelet[4739]: I0229 02:35:46.306461    4739 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-552840" podUID="96ececed-4269-4263-b61c-a34badde7f99"
	Feb 29 02:35:49 functional-552840 kubelet[4739]: I0229 02:35:49.338872    4739 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-552840" podStartSLOduration=4.338813411 podCreationTimestamp="2024-02-29 02:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-29 02:35:48.518021043 +0000 UTC m=+8.573775783" watchObservedRunningTime="2024-02-29 02:35:49.338813411 +0000 UTC m=+9.394568151"
	
	
	==> storage-provisioner [cac1e01798cbb742c5bbeadcda9fd910bcf270eb81e50eb6772e434e5f489fea] <==
	I0229 02:35:41.549654       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 02:35:41.575491       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 02:35:41.575649       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0229 02:35:45.039976       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [d1a6a3ea05682fa3b61b5ee196feadd6daaafc81fe0b16960f90224cf4c6398e] <==
	I0229 02:35:28.051920       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0229 02:35:28.053364       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:35:49.514007 1175480 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18063-1148303/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-552840 -n functional-552840
helpers_test.go:261: (dbg) Run:  kubectl --context functional-552840 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ComponentHealth (2.71s)
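
The kubelet and storage-provisioner logs above show the underlying condition for this failure: while kube-apiserver is being restarted, every call to it at 192.168.49.2:8441 (and to 10.96.0.1:443 in-cluster) fails with "connection refused". A minimal sketch, not part of the test suite, of waiting for that endpoint to accept TCP connections again before re-checking component health; the host and port come from the logs above, while the 2-minute budget and 1-second poll interval are assumptions made here:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed overall budget for this sketch
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("apiserver is accepting connections again")
				return
			}
			// While the apiserver restarts this fails the same way the kubelet
			// logs do: "dial tcp 192.168.49.2:8441: connect: connection refused".
			time.Sleep(time.Second)
		}
		fmt.Println("apiserver did not come back before the deadline")
	}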

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (182.13s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-080946 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0229 02:39:04.196389 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-080946 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.712809155s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-080946 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-080946 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [493c25c7-5a8d-4679-be25-5cda9e4b22c9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [493c25c7-5a8d-4679-be25-5cda9e4b22c9] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.003657819s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-080946 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0229 02:41:01.529736 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 02:41:01.535005 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 02:41:01.545312 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 02:41:01.565603 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 02:41:01.605846 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 02:41:01.686133 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 02:41:01.846548 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 02:41:02.167085 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 02:41:02.807869 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 02:41:04.088292 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 02:41:06.648911 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 02:41:11.770139 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 02:41:22.011038 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-080946 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.988442097s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
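
The check that failed here is a plain HTTP request to the ingress controller on the node, sent with the Host header nginx.example.com so the controller routes it to the nginx Service; the test issues it as curl inside the node via `minikube ssh`, and exit status 28 matches curl's operation-timed-out code. A rough Go equivalent of that request, as a sketch only: it assumes it is run inside the minikube node, and the 30-second timeout is an arbitrary value chosen here, not the test's:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Same URL and Host header as the failing curl above.
		req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		// Setting req.Host overrides the Host header; the ingress controller
		// uses it to pick the backend for nginx.example.com.
		req.Host = "nginx.example.com"

		client := &http.Client{Timeout: 30 * time.Second} // arbitrary timeout for this sketch
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("request failed:", err) // the test hit a timeout at this point
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status)
		fmt.Println(string(body))
	}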
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-080946 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-080946 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0229 02:41:42.491269 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.030315108s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
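
The DNS half of the check points nslookup directly at the node IP (192.168.49.2), where the ingress-dns addon is expected to answer for hello-john.test; here the query timed out with no servers reachable. A sketch of the same lookup in Go, forcing the standard-library resolver to query that address; the 5-second dial timeout and 15-second overall timeout are arbitrary values for this sketch:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// PreferGo makes the stdlib resolver use the Dial function below
		// instead of the system resolver, so every query goes to the node IP.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "192.168.49.2:53")
			},
		}

		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()

		addrs, err := r.LookupHost(ctx, "hello-john.test")
		if err != nil {
			fmt.Println("lookup failed:", err) // the test saw ";; connection timed out"
			return
		}
		fmt.Println(addrs)
	}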
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-080946 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-080946 addons disable ingress-dns --alsologtostderr -v=1: (2.30223993s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-080946 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-080946 addons disable ingress --alsologtostderr -v=1: (7.513750273s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-080946
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-080946:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "945f401f452a2c99bb031ace88ea71566049be86fde16f6019ebca5ddeea5f1e",
	        "Created": "2024-02-29T02:37:32.254532995Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182276,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T02:37:32.541567212Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/945f401f452a2c99bb031ace88ea71566049be86fde16f6019ebca5ddeea5f1e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/945f401f452a2c99bb031ace88ea71566049be86fde16f6019ebca5ddeea5f1e/hostname",
	        "HostsPath": "/var/lib/docker/containers/945f401f452a2c99bb031ace88ea71566049be86fde16f6019ebca5ddeea5f1e/hosts",
	        "LogPath": "/var/lib/docker/containers/945f401f452a2c99bb031ace88ea71566049be86fde16f6019ebca5ddeea5f1e/945f401f452a2c99bb031ace88ea71566049be86fde16f6019ebca5ddeea5f1e-json.log",
	        "Name": "/ingress-addon-legacy-080946",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-080946:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-080946",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/21758a5995a4ffffe7b5903fdd23932665d7971defc91f91192e6aa4e0e9690c-init/diff:/var/lib/docker/overlay2/330c2f3296cde464d6c1a52ceb432efd04754f92c402ca5b9f20e3ccc2c40d71/diff",
	                "MergedDir": "/var/lib/docker/overlay2/21758a5995a4ffffe7b5903fdd23932665d7971defc91f91192e6aa4e0e9690c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/21758a5995a4ffffe7b5903fdd23932665d7971defc91f91192e6aa4e0e9690c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/21758a5995a4ffffe7b5903fdd23932665d7971defc91f91192e6aa4e0e9690c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-080946",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-080946/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-080946",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-080946",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-080946",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9e36654ecb2ed0501ec54e39ffe01274d2c787e7e966b17fe2fce7314893b090",
	            "SandboxKey": "/var/run/docker/netns/9e36654ecb2e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34052"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34051"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34048"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34049"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-080946": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "945f401f452a",
	                        "ingress-addon-legacy-080946"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "39cd1fe2c634536edad92f3d9d7751e6e03887aadab3f942eb151433d1b482db",
	                    "EndpointID": "03621656785eeaab9a2851c12fbcf5d305e0576bff62f8d0c3fb7960969ac0db",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-080946",
	                        "945f401f452a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-080946 -n ingress-addon-legacy-080946
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-080946 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-080946 logs -n 25: (1.504021538s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-552840                                                      | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:36 UTC | 29 Feb 24 02:36 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-552840                                                      | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:36 UTC | 29 Feb 24 02:36 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-552840                                                      | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:36 UTC | 29 Feb 24 02:36 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-552840 image ls                                             | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:36 UTC | 29 Feb 24 02:36 UTC |
	| image          | functional-552840 image save                                           | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:36 UTC | 29 Feb 24 02:36 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-552840               |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-552840 image rm                                             | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:36 UTC | 29 Feb 24 02:36 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-552840               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-552840 image ls                                             | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:36 UTC | 29 Feb 24 02:37 UTC |
	| image          | functional-552840 image load                                           | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:37 UTC | 29 Feb 24 02:37 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-552840 image ls                                             | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:37 UTC | 29 Feb 24 02:37 UTC |
	| image          | functional-552840 image save --daemon                                  | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:37 UTC | 29 Feb 24 02:37 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-552840               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-552840                                                      | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:37 UTC | 29 Feb 24 02:37 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-552840                                                      | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:37 UTC | 29 Feb 24 02:37 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-552840 ssh pgrep                                            | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:37 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-552840                                                      | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:37 UTC | 29 Feb 24 02:37 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-552840                                                      | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:37 UTC | 29 Feb 24 02:37 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-552840 image build -t                                       | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:37 UTC | 29 Feb 24 02:37 UTC |
	|                | localhost/my-image:functional-552840                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-552840 image ls                                             | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:37 UTC | 29 Feb 24 02:37 UTC |
	| delete         | -p functional-552840                                                   | functional-552840           | jenkins | v1.32.0 | 29 Feb 24 02:37 UTC | 29 Feb 24 02:37 UTC |
	| start          | -p ingress-addon-legacy-080946                                         | ingress-addon-legacy-080946 | jenkins | v1.32.0 | 29 Feb 24 02:37 UTC | 29 Feb 24 02:38 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-080946                                            | ingress-addon-legacy-080946 | jenkins | v1.32.0 | 29 Feb 24 02:38 UTC | 29 Feb 24 02:38 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-080946                                            | ingress-addon-legacy-080946 | jenkins | v1.32.0 | 29 Feb 24 02:38 UTC | 29 Feb 24 02:38 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-080946                                            | ingress-addon-legacy-080946 | jenkins | v1.32.0 | 29 Feb 24 02:39 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-080946 ip                                         | ingress-addon-legacy-080946 | jenkins | v1.32.0 | 29 Feb 24 02:41 UTC | 29 Feb 24 02:41 UTC |
	| addons         | ingress-addon-legacy-080946                                            | ingress-addon-legacy-080946 | jenkins | v1.32.0 | 29 Feb 24 02:41 UTC | 29 Feb 24 02:41 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-080946                                            | ingress-addon-legacy-080946 | jenkins | v1.32.0 | 29 Feb 24 02:41 UTC | 29 Feb 24 02:41 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:37:08
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:37:08.720774 1181813 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:37:08.720916 1181813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:37:08.720928 1181813 out.go:304] Setting ErrFile to fd 2...
	I0229 02:37:08.720933 1181813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:37:08.721196 1181813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
	I0229 02:37:08.721626 1181813 out.go:298] Setting JSON to false
	I0229 02:37:08.722480 1181813 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22775,"bootTime":1709151454,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0229 02:37:08.722552 1181813 start.go:139] virtualization:  
	I0229 02:37:08.725392 1181813 out.go:177] * [ingress-addon-legacy-080946] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0229 02:37:08.727725 1181813 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:37:08.729383 1181813 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:37:08.727844 1181813 notify.go:220] Checking for updates...
	I0229 02:37:08.733071 1181813 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	I0229 02:37:08.735033 1181813 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	I0229 02:37:08.736975 1181813 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0229 02:37:08.739211 1181813 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:37:08.741381 1181813 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:37:08.761898 1181813 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0229 02:37:08.762051 1181813 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:37:08.840949 1181813 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:47 SystemTime:2024-02-29 02:37:08.832162711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:37:08.841053 1181813 docker.go:295] overlay module found
	I0229 02:37:08.843475 1181813 out.go:177] * Using the docker driver based on user configuration
	I0229 02:37:08.845245 1181813 start.go:299] selected driver: docker
	I0229 02:37:08.845280 1181813 start.go:903] validating driver "docker" against <nil>
	I0229 02:37:08.845294 1181813 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:37:08.846017 1181813 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:37:08.908376 1181813 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:47 SystemTime:2024-02-29 02:37:08.899086147 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:37:08.908546 1181813 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 02:37:08.908779 1181813 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:37:08.910713 1181813 out.go:177] * Using Docker driver with root privileges
	I0229 02:37:08.912502 1181813 cni.go:84] Creating CNI manager for ""
	I0229 02:37:08.912525 1181813 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0229 02:37:08.912534 1181813 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0229 02:37:08.912547 1181813 start_flags.go:323] config:
	{Name:ingress-addon-legacy-080946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-080946 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:37:08.916190 1181813 out.go:177] * Starting control plane node ingress-addon-legacy-080946 in cluster ingress-addon-legacy-080946
	I0229 02:37:08.918111 1181813 cache.go:121] Beginning downloading kic base image for docker with crio
	I0229 02:37:08.920134 1181813 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 02:37:08.922042 1181813 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0229 02:37:08.922076 1181813 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 02:37:08.936862 1181813 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 02:37:08.936889 1181813 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 02:37:08.990275 1181813 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0229 02:37:08.990310 1181813 cache.go:56] Caching tarball of preloaded images
	I0229 02:37:08.990506 1181813 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0229 02:37:08.994374 1181813 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0229 02:37:08.996145 1181813 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0229 02:37:09.104362 1181813 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0229 02:37:24.508336 1181813 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0229 02:37:24.508459 1181813 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0229 02:37:25.734658 1181813 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
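The download URL above carries an md5 checksum as a query parameter, and the preceding lines record minikube saving and re-verifying it. A manual re-check of the cached tarball against that checksum (illustrative only, using the path and md5 value from the log) would look like:

    # Re-verify the preload tarball against the md5 embedded in the download URL
    md5sum /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
    # expected digest: 8ddd7f37d9a9977fe856222993d36c3d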
	I0229 02:37:25.735039 1181813 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/config.json ...
	I0229 02:37:25.735074 1181813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/config.json: {Name:mk589792b0d31ca71c1e652320ad5c1cc4e74cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:37:25.735289 1181813 cache.go:194] Successfully downloaded all kic artifacts
	I0229 02:37:25.735319 1181813 start.go:365] acquiring machines lock for ingress-addon-legacy-080946: {Name:mk2fa65d8798a4ce46115540536b3ba33eab79a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:37:25.735387 1181813 start.go:369] acquired machines lock for "ingress-addon-legacy-080946" in 53.415µs
	I0229 02:37:25.735419 1181813 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-080946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-080946 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:37:25.735504 1181813 start.go:125] createHost starting for "" (driver="docker")
	I0229 02:37:25.737560 1181813 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0229 02:37:25.737785 1181813 start.go:159] libmachine.API.Create for "ingress-addon-legacy-080946" (driver="docker")
	I0229 02:37:25.737814 1181813 client.go:168] LocalClient.Create starting
	I0229 02:37:25.737889 1181813 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem
	I0229 02:37:25.737926 1181813 main.go:141] libmachine: Decoding PEM data...
	I0229 02:37:25.737945 1181813 main.go:141] libmachine: Parsing certificate...
	I0229 02:37:25.737999 1181813 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/cert.pem
	I0229 02:37:25.738022 1181813 main.go:141] libmachine: Decoding PEM data...
	I0229 02:37:25.738036 1181813 main.go:141] libmachine: Parsing certificate...
	I0229 02:37:25.738407 1181813 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-080946 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 02:37:25.753253 1181813 cli_runner.go:211] docker network inspect ingress-addon-legacy-080946 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 02:37:25.753345 1181813 network_create.go:281] running [docker network inspect ingress-addon-legacy-080946] to gather additional debugging logs...
	I0229 02:37:25.753362 1181813 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-080946
	W0229 02:37:25.768643 1181813 cli_runner.go:211] docker network inspect ingress-addon-legacy-080946 returned with exit code 1
	I0229 02:37:25.768674 1181813 network_create.go:284] error running [docker network inspect ingress-addon-legacy-080946]: docker network inspect ingress-addon-legacy-080946: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-080946 not found
	I0229 02:37:25.768689 1181813 network_create.go:286] output of [docker network inspect ingress-addon-legacy-080946]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-080946 not found
	
	** /stderr **
	I0229 02:37:25.768837 1181813 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 02:37:25.784155 1181813 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000548290}
	I0229 02:37:25.784200 1181813 network_create.go:124] attempt to create docker network ingress-addon-legacy-080946 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0229 02:37:25.784258 1181813 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-080946 ingress-addon-legacy-080946
	I0229 02:37:25.849746 1181813 network_create.go:108] docker network ingress-addon-legacy-080946 192.168.49.0/24 created
	I0229 02:37:25.849782 1181813 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-080946" container
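The network create command above pins the cluster to 192.168.49.0/24 with gateway 192.168.49.1, and the node container is then given the static address 192.168.49.2. The same network can be inspected by hand with the plain docker CLI (a sketch based on the inspect template minikube itself uses, not a step of the test):

    # Inspect the bridge network minikube just created
    docker network inspect ingress-addon-legacy-080946 \
      --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
    # expected: 192.168.49.0/24 via 192.168.49.1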
	I0229 02:37:25.849854 1181813 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 02:37:25.865136 1181813 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-080946 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-080946 --label created_by.minikube.sigs.k8s.io=true
	I0229 02:37:25.881385 1181813 oci.go:103] Successfully created a docker volume ingress-addon-legacy-080946
	I0229 02:37:25.881475 1181813 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-080946-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-080946 --entrypoint /usr/bin/test -v ingress-addon-legacy-080946:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 02:37:27.392891 1181813 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-080946-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-080946 --entrypoint /usr/bin/test -v ingress-addon-legacy-080946:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (1.511364046s)
	I0229 02:37:27.392922 1181813 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-080946
	I0229 02:37:27.392941 1181813 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0229 02:37:27.392961 1181813 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 02:37:27.393047 1181813 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-080946:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0229 02:37:32.181507 1181813 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-080946:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (4.788422024s)
	I0229 02:37:32.181542 1181813 kic.go:203] duration metric: took 4.788578 seconds to extract preloaded images to volume
	W0229 02:37:32.181715 1181813 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0229 02:37:32.181831 1181813 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0229 02:37:32.240696 1181813 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-080946 --name ingress-addon-legacy-080946 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-080946 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-080946 --network ingress-addon-legacy-080946 --ip 192.168.49.2 --volume ingress-addon-legacy-080946:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0229 02:37:32.552540 1181813 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-080946 --format={{.State.Running}}
	I0229 02:37:32.580232 1181813 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-080946 --format={{.State.Status}}
	I0229 02:37:32.601318 1181813 cli_runner.go:164] Run: docker exec ingress-addon-legacy-080946 stat /var/lib/dpkg/alternatives/iptables
	I0229 02:37:32.658378 1181813 oci.go:144] the created container "ingress-addon-legacy-080946" has a running status.
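The docker run invocation above publishes ports 22, 2376, 5000, 8443 and 32443 of the node container on 127.0.0.1 with ephemeral host ports. Listing the mappings that were actually assigned (illustrative, not part of the log) is a one-liner:

    # Show which loopback ports were bound for the node container
    docker port ingress-addon-legacy-080946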
	I0229 02:37:32.658421 1181813 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/ingress-addon-legacy-080946/id_rsa...
	I0229 02:37:34.006818 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/ingress-addon-legacy-080946/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0229 02:37:34.006874 1181813 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/ingress-addon-legacy-080946/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0229 02:37:34.026065 1181813 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-080946 --format={{.State.Status}}
	I0229 02:37:34.046552 1181813 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0229 02:37:34.046575 1181813 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-080946 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0229 02:37:34.101081 1181813 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-080946 --format={{.State.Status}}
	I0229 02:37:34.117762 1181813 machine.go:88] provisioning docker machine ...
	I0229 02:37:34.117804 1181813 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-080946"
	I0229 02:37:34.117874 1181813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-080946
	I0229 02:37:34.134015 1181813 main.go:141] libmachine: Using SSH client type: native
	I0229 02:37:34.134302 1181813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 34052 <nil> <nil>}
	I0229 02:37:34.134322 1181813 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-080946 && echo "ingress-addon-legacy-080946" | sudo tee /etc/hostname
	I0229 02:37:34.275506 1181813 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-080946
	
	I0229 02:37:34.275593 1181813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-080946
	I0229 02:37:34.291910 1181813 main.go:141] libmachine: Using SSH client type: native
	I0229 02:37:34.292177 1181813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 34052 <nil> <nil>}
	I0229 02:37:34.292202 1181813 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-080946' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-080946/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-080946' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:37:34.420021 1181813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:37:34.420048 1181813 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18063-1148303/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-1148303/.minikube}
	I0229 02:37:34.420079 1181813 ubuntu.go:177] setting up certificates
	I0229 02:37:34.420102 1181813 provision.go:83] configureAuth start
	I0229 02:37:34.420184 1181813 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-080946
	I0229 02:37:34.435734 1181813 provision.go:138] copyHostCerts
	I0229 02:37:34.435777 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.pem
	I0229 02:37:34.435806 1181813 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.pem, removing ...
	I0229 02:37:34.435817 1181813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.pem
	I0229 02:37:34.435895 1181813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.pem (1082 bytes)
	I0229 02:37:34.435979 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18063-1148303/.minikube/cert.pem
	I0229 02:37:34.436020 1181813 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-1148303/.minikube/cert.pem, removing ...
	I0229 02:37:34.436025 1181813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-1148303/.minikube/cert.pem
	I0229 02:37:34.436054 1181813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-1148303/.minikube/cert.pem (1123 bytes)
	I0229 02:37:34.436097 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18063-1148303/.minikube/key.pem
	I0229 02:37:34.436114 1181813 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-1148303/.minikube/key.pem, removing ...
	I0229 02:37:34.436125 1181813 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-1148303/.minikube/key.pem
	I0229 02:37:34.436152 1181813 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-1148303/.minikube/key.pem (1675 bytes)
	I0229 02:37:34.436203 1181813 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-080946 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-080946]
	I0229 02:37:34.775728 1181813 provision.go:172] copyRemoteCerts
	I0229 02:37:34.775827 1181813 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:37:34.775888 1181813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-080946
	I0229 02:37:34.793982 1181813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34052 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/ingress-addon-legacy-080946/id_rsa Username:docker}
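All of the provisioning commands below run over this SSH connection to the node container. An equivalent interactive session, assuming the key path and host port recorded in the line above, would be:

    # Open the same SSH session by hand (illustrative)
    ssh -i /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/ingress-addon-legacy-080946/id_rsa \
        -p 34052 docker@127.0.0.1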
	I0229 02:37:34.884538 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0229 02:37:34.884603 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:37:34.908338 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0229 02:37:34.908399 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 02:37:34.931368 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0229 02:37:34.931437 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 02:37:34.954211 1181813 provision.go:86] duration metric: configureAuth took 534.087882ms
	I0229 02:37:34.954239 1181813 ubuntu.go:193] setting minikube options for container-runtime
	I0229 02:37:34.954427 1181813 config.go:182] Loaded profile config "ingress-addon-legacy-080946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 02:37:34.954546 1181813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-080946
	I0229 02:37:34.971137 1181813 main.go:141] libmachine: Using SSH client type: native
	I0229 02:37:34.971388 1181813 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 34052 <nil> <nil>}
	I0229 02:37:34.971409 1181813 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:37:35.236575 1181813 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:37:35.236603 1181813 machine.go:91] provisioned docker machine in 1.118814811s
	I0229 02:37:35.236613 1181813 client.go:171] LocalClient.Create took 9.498788407s
	I0229 02:37:35.236626 1181813 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-080946" took 9.498841453s
	I0229 02:37:35.236648 1181813 start.go:300] post-start starting for "ingress-addon-legacy-080946" (driver="docker")
	I0229 02:37:35.236663 1181813 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:37:35.236733 1181813 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:37:35.236781 1181813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-080946
	I0229 02:37:35.252113 1181813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34052 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/ingress-addon-legacy-080946/id_rsa Username:docker}
	I0229 02:37:35.345044 1181813 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:37:35.348177 1181813 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0229 02:37:35.348219 1181813 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0229 02:37:35.348232 1181813 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0229 02:37:35.348240 1181813 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0229 02:37:35.348251 1181813 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-1148303/.minikube/addons for local assets ...
	I0229 02:37:35.348318 1181813 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-1148303/.minikube/files for local assets ...
	I0229 02:37:35.348429 1181813 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/11536582.pem -> 11536582.pem in /etc/ssl/certs
	I0229 02:37:35.348442 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/11536582.pem -> /etc/ssl/certs/11536582.pem
	I0229 02:37:35.348567 1181813 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:37:35.357300 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/11536582.pem --> /etc/ssl/certs/11536582.pem (1708 bytes)
	I0229 02:37:35.382689 1181813 start.go:303] post-start completed in 146.019287ms
	I0229 02:37:35.383139 1181813 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-080946
	I0229 02:37:35.398658 1181813 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/config.json ...
	I0229 02:37:35.398951 1181813 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 02:37:35.399001 1181813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-080946
	I0229 02:37:35.414030 1181813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34052 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/ingress-addon-legacy-080946/id_rsa Username:docker}
	I0229 02:37:35.504473 1181813 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 02:37:35.508701 1181813 start.go:128] duration metric: createHost completed in 9.773180243s
	I0229 02:37:35.508767 1181813 start.go:83] releasing machines lock for "ingress-addon-legacy-080946", held for 9.773359484s
	I0229 02:37:35.508860 1181813 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-080946
	I0229 02:37:35.524714 1181813 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:37:35.524810 1181813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-080946
	I0229 02:37:35.524714 1181813 ssh_runner.go:195] Run: cat /version.json
	I0229 02:37:35.524857 1181813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-080946
	I0229 02:37:35.548155 1181813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34052 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/ingress-addon-legacy-080946/id_rsa Username:docker}
	I0229 02:37:35.548580 1181813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34052 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/ingress-addon-legacy-080946/id_rsa Username:docker}
	I0229 02:37:35.748923 1181813 ssh_runner.go:195] Run: systemctl --version
	I0229 02:37:35.753122 1181813 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:37:35.893925 1181813 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 02:37:35.898106 1181813 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:37:35.920278 1181813 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0229 02:37:35.920354 1181813 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:37:35.952817 1181813 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0229 02:37:35.952884 1181813 start.go:475] detecting cgroup driver to use...
	I0229 02:37:35.952931 1181813 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 02:37:35.953025 1181813 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:37:35.968210 1181813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:37:35.978844 1181813 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:37:35.978940 1181813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:37:35.991822 1181813 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:37:36.008315 1181813 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:37:36.103999 1181813 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:37:36.205967 1181813 docker.go:233] disabling docker service ...
	I0229 02:37:36.206075 1181813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:37:36.225798 1181813 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:37:36.238016 1181813 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:37:36.327767 1181813 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:37:36.421420 1181813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:37:36.432350 1181813 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:37:36.448121 1181813 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0229 02:37:36.448184 1181813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:37:36.457339 1181813 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:37:36.457414 1181813 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:37:36.468866 1181813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:37:36.479152 1181813 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:37:36.489965 1181813 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:37:36.499699 1181813 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:37:36.508610 1181813 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:37:36.517062 1181813 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:37:36.604379 1181813 ssh_runner.go:195] Run: sudo systemctl restart crio
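Taken together, the tee and sed commands above leave the crictl and CRI-O drop-in configuration on the node roughly as follows (reconstructed from the logged commands; the real files carry additional defaults):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (keys touched here only)
    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"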
	I0229 02:37:36.717486 1181813 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:37:36.717603 1181813 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:37:36.721703 1181813 start.go:543] Will wait 60s for crictl version
	I0229 02:37:36.721801 1181813 ssh_runner.go:195] Run: which crictl
	I0229 02:37:36.725027 1181813 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:37:36.767794 1181813 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0229 02:37:36.767911 1181813 ssh_runner.go:195] Run: crio --version
	I0229 02:37:36.810139 1181813 ssh_runner.go:195] Run: crio --version
	I0229 02:37:36.851524 1181813 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0229 02:37:36.853299 1181813 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-080946 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 02:37:36.867909 1181813 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0229 02:37:36.871461 1181813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:37:36.881723 1181813 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0229 02:37:36.881806 1181813 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:37:36.926789 1181813 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0229 02:37:36.926865 1181813 ssh_runner.go:195] Run: which lz4
	I0229 02:37:36.930301 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0229 02:37:36.930399 1181813 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 02:37:36.933523 1181813 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:37:36.933558 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0229 02:37:39.252243 1181813 crio.go:444] Took 2.321876 seconds to copy over tarball
	I0229 02:37:39.252330 1181813 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:37:42.212581 1181813 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.960210684s)
	I0229 02:37:42.212612 1181813 crio.go:451] Took 2.960331 seconds to extract the tarball
	I0229 02:37:42.212625 1181813 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:37:42.417568 1181813 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:37:42.454321 1181813 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0229 02:37:42.454348 1181813 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:37:42.454459 1181813 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:37:42.454677 1181813 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 02:37:42.454789 1181813 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 02:37:42.454883 1181813 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 02:37:42.454985 1181813 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 02:37:42.455090 1181813 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0229 02:37:42.455244 1181813 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0229 02:37:42.455346 1181813 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0229 02:37:42.456625 1181813 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0229 02:37:42.457046 1181813 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 02:37:42.457193 1181813 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 02:37:42.457305 1181813 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0229 02:37:42.457402 1181813 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:37:42.457978 1181813 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 02:37:42.458322 1181813 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 02:37:42.458515 1181813 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	W0229 02:37:42.811340 1181813 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0229 02:37:42.811540 1181813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0229 02:37:42.822456 1181813 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0229 02:37:42.822688 1181813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0229 02:37:42.826174 1181813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0229 02:37:42.831407 1181813 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0229 02:37:42.831627 1181813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0229 02:37:42.833027 1181813 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0229 02:37:42.833280 1181813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0229 02:37:42.841879 1181813 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0229 02:37:42.842101 1181813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0229 02:37:42.859811 1181813 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0229 02:37:42.860034 1181813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0229 02:37:42.932443 1181813 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0229 02:37:42.932499 1181813 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 02:37:42.932561 1181813 ssh_runner.go:195] Run: which crictl
	W0229 02:37:42.991291 1181813 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0229 02:37:42.991469 1181813 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:37:43.005378 1181813 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0229 02:37:43.005426 1181813 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 02:37:43.005486 1181813 ssh_runner.go:195] Run: which crictl
	I0229 02:37:43.009639 1181813 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0229 02:37:43.009690 1181813 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0229 02:37:43.009748 1181813 ssh_runner.go:195] Run: which crictl
	I0229 02:37:43.009860 1181813 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0229 02:37:43.009898 1181813 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0229 02:37:43.009929 1181813 ssh_runner.go:195] Run: which crictl
	I0229 02:37:43.010008 1181813 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0229 02:37:43.010030 1181813 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 02:37:43.010062 1181813 ssh_runner.go:195] Run: which crictl
	I0229 02:37:43.029519 1181813 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0229 02:37:43.029572 1181813 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 02:37:43.029619 1181813 ssh_runner.go:195] Run: which crictl
	I0229 02:37:43.029729 1181813 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0229 02:37:43.029756 1181813 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0229 02:37:43.029790 1181813 ssh_runner.go:195] Run: which crictl
	I0229 02:37:43.029877 1181813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 02:37:43.181516 1181813 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0229 02:37:43.181626 1181813 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:37:43.181682 1181813 ssh_runner.go:195] Run: which crictl
	I0229 02:37:43.181781 1181813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0229 02:37:43.181834 1181813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0229 02:37:43.181886 1181813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0229 02:37:43.181792 1181813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0229 02:37:43.181969 1181813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0229 02:37:43.182036 1181813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0229 02:37:43.182044 1181813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0229 02:37:43.324581 1181813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0229 02:37:43.324695 1181813 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:37:43.324769 1181813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0229 02:37:43.324832 1181813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0229 02:37:43.324876 1181813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0229 02:37:43.324911 1181813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0229 02:37:43.324948 1181813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0229 02:37:43.381488 1181813 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0229 02:37:43.381589 1181813 cache_images.go:92] LoadImages completed in 927.22321ms
	W0229 02:37:43.381677 1181813 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
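The warning is non-fatal: the per-image cache files simply do not exist on this CI host, so the v1.18.20 control-plane images will be fetched from the registry when they are needed. A manual equivalent on the node, purely for illustration, would be:

    # Pull one of the missing images directly through CRI-O and confirm it landed
    sudo crictl pull registry.k8s.io/kube-apiserver:v1.18.20
    sudo crictl images | grep kube-apiserver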
	I0229 02:37:43.381759 1181813 ssh_runner.go:195] Run: crio config
	I0229 02:37:43.455916 1181813 cni.go:84] Creating CNI manager for ""
	I0229 02:37:43.456006 1181813 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0229 02:37:43.456056 1181813 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:37:43.456090 1181813 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-080946 NodeName:ingress-addon-legacy-080946 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 02:37:43.456238 1181813 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-080946"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
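The generated kubeadm configuration is later written to /var/tmp/minikube/kubeadm.yaml.new (see the scp lines further down). Assuming that path, it can be sanity-checked against the bundled v1.18.20 kubeadm binary without touching the cluster; this dry run is a sketch, not a step the test performs:

    # Dry-run the generated kubeadm configuration on the node
    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run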
	
	I0229 02:37:43.456317 1181813 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-080946 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-080946 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:37:43.456388 1181813 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0229 02:37:43.464657 1181813 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:37:43.464726 1181813 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:37:43.473233 1181813 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0229 02:37:43.491013 1181813 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0229 02:37:43.508665 1181813 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0229 02:37:43.526423 1181813 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0229 02:37:43.529789 1181813 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:37:43.541009 1181813 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946 for IP: 192.168.49.2
	I0229 02:37:43.541098 1181813 certs.go:190] acquiring lock for shared ca certs: {Name:mk629bf08f2bf9bf9dfe188d027237a0e3bc8e94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:37:43.541288 1181813 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.key
	I0229 02:37:43.541339 1181813 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.key
	I0229 02:37:43.541408 1181813 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.key
	I0229 02:37:43.541432 1181813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt with IP's: []
	I0229 02:37:43.729735 1181813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt ...
	I0229 02:37:43.729767 1181813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: {Name:mka161bf409d3f71db351a893657e5ccb5d64da9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:37:43.729964 1181813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.key ...
	I0229 02:37:43.729979 1181813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.key: {Name:mk480f9f99368a4a2a562ff9f4d4fb619f594c1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:37:43.730088 1181813 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/apiserver.key.dd3b5fb2
	I0229 02:37:43.730105 1181813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 02:37:44.074191 1181813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/apiserver.crt.dd3b5fb2 ...
	I0229 02:37:44.074223 1181813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/apiserver.crt.dd3b5fb2: {Name:mkc3616169b314181b0d73c35df13af858405d1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:37:44.074414 1181813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/apiserver.key.dd3b5fb2 ...
	I0229 02:37:44.074429 1181813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/apiserver.key.dd3b5fb2: {Name:mk7b507a4f9270559d5408af434e323469b03e3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:37:44.074514 1181813 certs.go:337] copying /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/apiserver.crt
	I0229 02:37:44.074598 1181813 certs.go:341] copying /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/apiserver.key
	I0229 02:37:44.074666 1181813 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/proxy-client.key
	I0229 02:37:44.074688 1181813 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/proxy-client.crt with IP's: []
	I0229 02:37:44.490747 1181813 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/proxy-client.crt ...
	I0229 02:37:44.490781 1181813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/proxy-client.crt: {Name:mk5899ee066a341133efb51a3aa0601eac7f1ca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:37:44.490970 1181813 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/proxy-client.key ...
	I0229 02:37:44.490985 1181813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/proxy-client.key: {Name:mka0fc47029160c507474ba2d2503599066282cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:37:44.491075 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 02:37:44.491095 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 02:37:44.491110 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 02:37:44.491137 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 02:37:44.491152 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 02:37:44.491166 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0229 02:37:44.491180 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 02:37:44.491190 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 02:37:44.491246 1181813 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/1153658.pem (1338 bytes)
	W0229 02:37:44.491291 1181813 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/1153658_empty.pem, impossibly tiny 0 bytes
	I0229 02:37:44.491304 1181813 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 02:37:44.491333 1181813 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:37:44.491360 1181813 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:37:44.491388 1181813 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/certs/key.pem (1675 bytes)
	I0229 02:37:44.491432 1181813 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/11536582.pem (1708 bytes)
	I0229 02:37:44.491467 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/11536582.pem -> /usr/share/ca-certificates/11536582.pem
	I0229 02:37:44.491483 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:37:44.491493 1181813 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/1153658.pem -> /usr/share/ca-certificates/1153658.pem
	I0229 02:37:44.492086 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:37:44.515538 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:37:44.540052 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:37:44.563078 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 02:37:44.587360 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:37:44.610456 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 02:37:44.633843 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:37:44.657170 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 02:37:44.680514 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/ssl/certs/11536582.pem --> /usr/share/ca-certificates/11536582.pem (1708 bytes)
	I0229 02:37:44.704066 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:37:44.727579 1181813 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-1148303/.minikube/certs/1153658.pem --> /usr/share/ca-certificates/1153658.pem (1338 bytes)
	I0229 02:37:44.751185 1181813 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:37:44.768742 1181813 ssh_runner.go:195] Run: openssl version
	I0229 02:37:44.774333 1181813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11536582.pem && ln -fs /usr/share/ca-certificates/11536582.pem /etc/ssl/certs/11536582.pem"
	I0229 02:37:44.783436 1181813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11536582.pem
	I0229 02:37:44.786741 1181813 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 02:33 /usr/share/ca-certificates/11536582.pem
	I0229 02:37:44.786832 1181813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11536582.pem
	I0229 02:37:44.793527 1181813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11536582.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:37:44.802813 1181813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:37:44.811837 1181813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:37:44.815232 1181813 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:37:44.815300 1181813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:37:44.822045 1181813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:37:44.831352 1181813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1153658.pem && ln -fs /usr/share/ca-certificates/1153658.pem /etc/ssl/certs/1153658.pem"
	I0229 02:37:44.840727 1181813 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1153658.pem
	I0229 02:37:44.844154 1181813 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 02:33 /usr/share/ca-certificates/1153658.pem
	I0229 02:37:44.844214 1181813 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1153658.pem
	I0229 02:37:44.850889 1181813 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1153658.pem /etc/ssl/certs/51391683.0"
	I0229 02:37:44.860028 1181813 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:37:44.862996 1181813 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:37:44.863048 1181813 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-080946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-080946 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:37:44.863149 1181813 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:37:44.863203 1181813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:37:44.899308 1181813 cri.go:89] found id: ""
	I0229 02:37:44.899383 1181813 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:37:44.908355 1181813 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:37:44.917446 1181813 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0229 02:37:44.917536 1181813 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:37:44.926274 1181813 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:37:44.926317 1181813 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0229 02:37:44.977333 1181813 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 02:37:44.977392 1181813 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:37:45.078868 1181813 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0229 02:37:45.079002 1181813 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1055-aws
	I0229 02:37:45.079069 1181813 kubeadm.go:322] OS: Linux
	I0229 02:37:45.079156 1181813 kubeadm.go:322] CGROUPS_CPU: enabled
	I0229 02:37:45.079237 1181813 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0229 02:37:45.079315 1181813 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0229 02:37:45.079389 1181813 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0229 02:37:45.079467 1181813 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0229 02:37:45.079542 1181813 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0229 02:37:45.194232 1181813 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:37:45.194422 1181813 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:37:45.194551 1181813 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:37:45.466404 1181813 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:37:45.468410 1181813 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:37:45.468605 1181813 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:37:45.576361 1181813 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:37:45.580104 1181813 out.go:204]   - Generating certificates and keys ...
	I0229 02:37:45.580302 1181813 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:37:45.580433 1181813 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:37:46.090875 1181813 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 02:37:46.961926 1181813 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 02:37:48.104796 1181813 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 02:37:48.610201 1181813 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 02:37:48.897250 1181813 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 02:37:48.897624 1181813 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-080946 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0229 02:37:49.578798 1181813 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 02:37:49.579160 1181813 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-080946 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0229 02:37:50.839093 1181813 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 02:37:52.346715 1181813 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 02:37:52.975270 1181813 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 02:37:52.975575 1181813 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:37:53.154686 1181813 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:37:53.807523 1181813 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:37:54.446412 1181813 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:37:54.654164 1181813 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:37:54.654907 1181813 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:37:54.657007 1181813 out.go:204]   - Booting up control plane ...
	I0229 02:37:54.657106 1181813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:37:54.664244 1181813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:37:54.670191 1181813 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:37:54.671830 1181813 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:37:54.674787 1181813 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:38:07.677687 1181813 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.002450 seconds
	I0229 02:38:07.677805 1181813 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:38:07.692696 1181813 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:38:08.218185 1181813 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:38:08.218335 1181813 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-080946 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0229 02:38:08.726048 1181813 kubeadm.go:322] [bootstrap-token] Using token: o4gudc.kt5xj0na7n1b1bo1
	I0229 02:38:08.727798 1181813 out.go:204]   - Configuring RBAC rules ...
	I0229 02:38:08.727933 1181813 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:38:08.732381 1181813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:38:08.744073 1181813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:38:08.747530 1181813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:38:08.752782 1181813 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:38:08.760637 1181813 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:38:08.774924 1181813 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:38:09.115786 1181813 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:38:09.163682 1181813 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:38:09.165470 1181813 kubeadm.go:322] 
	I0229 02:38:09.165560 1181813 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:38:09.165573 1181813 kubeadm.go:322] 
	I0229 02:38:09.165670 1181813 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:38:09.165682 1181813 kubeadm.go:322] 
	I0229 02:38:09.165710 1181813 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:38:09.165779 1181813 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:38:09.165853 1181813 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:38:09.165865 1181813 kubeadm.go:322] 
	I0229 02:38:09.165924 1181813 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:38:09.166024 1181813 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:38:09.166108 1181813 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:38:09.166116 1181813 kubeadm.go:322] 
	I0229 02:38:09.166217 1181813 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:38:09.166306 1181813 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:38:09.166317 1181813 kubeadm.go:322] 
	I0229 02:38:09.166402 1181813 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token o4gudc.kt5xj0na7n1b1bo1 \
	I0229 02:38:09.166524 1181813 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0eed3bb06de93eaacfde26833aa0934eb72e0c80231d6eec065ff79fcf497e29 \
	I0229 02:38:09.166556 1181813 kubeadm.go:322]     --control-plane 
	I0229 02:38:09.166568 1181813 kubeadm.go:322] 
	I0229 02:38:09.166664 1181813 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:38:09.166678 1181813 kubeadm.go:322] 
	I0229 02:38:09.166764 1181813 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token o4gudc.kt5xj0na7n1b1bo1 \
	I0229 02:38:09.166872 1181813 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0eed3bb06de93eaacfde26833aa0934eb72e0c80231d6eec065ff79fcf497e29 
	I0229 02:38:09.171068 1181813 kubeadm.go:322] W0229 02:37:44.976760    1231 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 02:38:09.171317 1181813 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0229 02:38:09.171430 1181813 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:38:09.171567 1181813 kubeadm.go:322] W0229 02:37:54.668093    1231 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 02:38:09.171714 1181813 kubeadm.go:322] W0229 02:37:54.670432    1231 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 02:38:09.171743 1181813 cni.go:84] Creating CNI manager for ""
	I0229 02:38:09.171757 1181813 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0229 02:38:09.173971 1181813 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0229 02:38:09.176201 1181813 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 02:38:09.180101 1181813 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0229 02:38:09.180122 1181813 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 02:38:09.199784 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 02:38:09.616904 1181813 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:38:09.617042 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:09.617124 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=ingress-addon-legacy-080946 minikube.k8s.io/updated_at=2024_02_29T02_38_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:09.756174 1181813 ops.go:34] apiserver oom_adj: -16
	I0229 02:38:09.756276 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:10.256423 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:10.756396 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:11.256668 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:11.757224 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:12.257136 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:12.757040 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:13.257358 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:13.757124 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:14.257205 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:14.757262 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:15.256573 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:15.757187 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:16.257218 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:16.756880 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:17.256773 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:17.756635 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:18.257147 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:18.756927 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:19.256411 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:19.757061 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:20.256879 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:20.757229 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:21.256400 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:21.756490 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:22.256442 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:22.756482 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:23.256418 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:23.757245 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:24.256476 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:24.756445 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:25.257174 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:25.757111 1181813 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:38:25.866618 1181813 kubeadm.go:1088] duration metric: took 16.249628195s to wait for elevateKubeSystemPrivileges.
	I0229 02:38:25.866648 1181813 kubeadm.go:406] StartCluster complete in 41.003604231s
	I0229 02:38:25.866666 1181813 settings.go:142] acquiring lock: {Name:mk749db1aa854bc5a32d1a0b4d36b81f911e799c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:38:25.866723 1181813 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-1148303/kubeconfig
	I0229 02:38:25.867434 1181813 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-1148303/kubeconfig: {Name:mka2c9192ec48968c9ed900867eac085a9478c66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:38:25.868118 1181813 kapi.go:59] client config for ingress-addon-legacy-080946: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.key", CAFile:"/home/jenkins/minikube-integration/18063-1148303/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1704fb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:38:25.868965 1181813 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:38:25.869223 1181813 config.go:182] Loaded profile config "ingress-addon-legacy-080946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 02:38:25.869253 1181813 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:38:25.869314 1181813 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-080946"
	I0229 02:38:25.869327 1181813 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-080946"
	I0229 02:38:25.869366 1181813 host.go:66] Checking if "ingress-addon-legacy-080946" exists ...
	I0229 02:38:25.869798 1181813 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-080946 --format={{.State.Status}}
	I0229 02:38:25.870593 1181813 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 02:38:25.870681 1181813 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-080946"
	I0229 02:38:25.870699 1181813 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-080946"
	I0229 02:38:25.871006 1181813 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-080946 --format={{.State.Status}}
	I0229 02:38:25.917840 1181813 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:38:25.916323 1181813 kapi.go:59] client config for ingress-addon-legacy-080946: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.key", CAFile:"/home/jenkins/minikube-integration/18063-1148303/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1704fb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:38:25.921925 1181813 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-080946"
	I0229 02:38:25.921960 1181813 host.go:66] Checking if "ingress-addon-legacy-080946" exists ...
	I0229 02:38:25.922444 1181813 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-080946 --format={{.State.Status}}
	I0229 02:38:25.922733 1181813 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:38:25.922747 1181813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:38:25.922789 1181813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-080946
	I0229 02:38:25.977016 1181813 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:38:25.977037 1181813 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:38:25.977099 1181813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-080946
	I0229 02:38:25.977270 1181813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34052 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/ingress-addon-legacy-080946/id_rsa Username:docker}
	I0229 02:38:26.003343 1181813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34052 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/ingress-addon-legacy-080946/id_rsa Username:docker}
	I0229 02:38:26.121403 1181813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:38:26.161519 1181813 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:38:26.165708 1181813 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 02:38:26.466780 1181813 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-080946" context rescaled to 1 replicas
	I0229 02:38:26.466833 1181813 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:38:26.470569 1181813 out.go:177] * Verifying Kubernetes components...
	I0229 02:38:26.472447 1181813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:38:26.747115 1181813 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0229 02:38:26.749263 1181813 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0229 02:38:26.748050 1181813 kapi.go:59] client config for ingress-addon-legacy-080946: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.key", CAFile:"/home/jenkins/minikube-integration/18063-1148303/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1704fb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 02:38:26.751350 1181813 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-080946" to be "Ready" ...
	I0229 02:38:26.751532 1181813 addons.go:505] enable addons completed in 882.276552ms: enabled=[default-storageclass storage-provisioner]
	I0229 02:38:28.755300 1181813 node_ready.go:58] node "ingress-addon-legacy-080946" has status "Ready":"False"
	I0229 02:38:31.254363 1181813 node_ready.go:58] node "ingress-addon-legacy-080946" has status "Ready":"False"
	I0229 02:38:32.754167 1181813 node_ready.go:49] node "ingress-addon-legacy-080946" has status "Ready":"True"
	I0229 02:38:32.754192 1181813 node_ready.go:38] duration metric: took 6.002818466s waiting for node "ingress-addon-legacy-080946" to be "Ready" ...
	I0229 02:38:32.754204 1181813 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:38:32.761532 1181813 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-dlmhz" in "kube-system" namespace to be "Ready" ...
	I0229 02:38:34.765013 1181813 pod_ready.go:102] pod "coredns-66bff467f8-dlmhz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 02:38:25 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0229 02:38:36.766851 1181813 pod_ready.go:102] pod "coredns-66bff467f8-dlmhz" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:38.767886 1181813 pod_ready.go:102] pod "coredns-66bff467f8-dlmhz" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:41.267204 1181813 pod_ready.go:102] pod "coredns-66bff467f8-dlmhz" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:43.766898 1181813 pod_ready.go:102] pod "coredns-66bff467f8-dlmhz" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:44.767103 1181813 pod_ready.go:92] pod "coredns-66bff467f8-dlmhz" in "kube-system" namespace has status "Ready":"True"
	I0229 02:38:44.767189 1181813 pod_ready.go:81] duration metric: took 12.005620227s waiting for pod "coredns-66bff467f8-dlmhz" in "kube-system" namespace to be "Ready" ...
	I0229 02:38:44.767226 1181813 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-080946" in "kube-system" namespace to be "Ready" ...
	I0229 02:38:44.772150 1181813 pod_ready.go:92] pod "etcd-ingress-addon-legacy-080946" in "kube-system" namespace has status "Ready":"True"
	I0229 02:38:44.772175 1181813 pod_ready.go:81] duration metric: took 4.917788ms waiting for pod "etcd-ingress-addon-legacy-080946" in "kube-system" namespace to be "Ready" ...
	I0229 02:38:44.772190 1181813 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-080946" in "kube-system" namespace to be "Ready" ...
	I0229 02:38:44.776992 1181813 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-080946" in "kube-system" namespace has status "Ready":"True"
	I0229 02:38:44.777021 1181813 pod_ready.go:81] duration metric: took 4.822404ms waiting for pod "kube-apiserver-ingress-addon-legacy-080946" in "kube-system" namespace to be "Ready" ...
	I0229 02:38:44.777033 1181813 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-080946" in "kube-system" namespace to be "Ready" ...
	I0229 02:38:44.781849 1181813 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-080946" in "kube-system" namespace has status "Ready":"True"
	I0229 02:38:44.781876 1181813 pod_ready.go:81] duration metric: took 4.834334ms waiting for pod "kube-controller-manager-ingress-addon-legacy-080946" in "kube-system" namespace to be "Ready" ...
	I0229 02:38:44.781888 1181813 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vzb55" in "kube-system" namespace to be "Ready" ...
	I0229 02:38:44.786252 1181813 pod_ready.go:92] pod "kube-proxy-vzb55" in "kube-system" namespace has status "Ready":"True"
	I0229 02:38:44.786277 1181813 pod_ready.go:81] duration metric: took 4.382316ms waiting for pod "kube-proxy-vzb55" in "kube-system" namespace to be "Ready" ...
	I0229 02:38:44.786288 1181813 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-080946" in "kube-system" namespace to be "Ready" ...
	I0229 02:38:44.962698 1181813 request.go:629] Waited for 176.343974ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-080946
	I0229 02:38:45.162386 1181813 request.go:629] Waited for 196.441314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-080946
	I0229 02:38:45.169052 1181813 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-080946" in "kube-system" namespace has status "Ready":"True"
	I0229 02:38:45.169141 1181813 pod_ready.go:81] duration metric: took 382.844321ms waiting for pod "kube-scheduler-ingress-addon-legacy-080946" in "kube-system" namespace to be "Ready" ...
	I0229 02:38:45.169174 1181813 pod_ready.go:38] duration metric: took 12.414956155s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:38:45.169258 1181813 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:38:45.169375 1181813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:38:45.189486 1181813 api_server.go:72] duration metric: took 18.722598154s to wait for apiserver process to appear ...
	I0229 02:38:45.189525 1181813 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:38:45.189552 1181813 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0229 02:38:45.200857 1181813 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0229 02:38:45.201907 1181813 api_server.go:141] control plane version: v1.18.20
	I0229 02:38:45.201943 1181813 api_server.go:131] duration metric: took 12.409ms to wait for apiserver health ...
	I0229 02:38:45.201954 1181813 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:38:45.362426 1181813 request.go:629] Waited for 160.400414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0229 02:38:45.371055 1181813 system_pods.go:59] 8 kube-system pods found
	I0229 02:38:45.371193 1181813 system_pods.go:61] "coredns-66bff467f8-dlmhz" [32f3296e-a681-4d38-9213-d1e77953c07c] Running
	I0229 02:38:45.371247 1181813 system_pods.go:61] "etcd-ingress-addon-legacy-080946" [6914fb1d-9fe3-4709-9980-535a94bf2f6b] Running
	I0229 02:38:45.371278 1181813 system_pods.go:61] "kindnet-vnbt5" [46ac7908-bfed-4887-8244-cf5aaa819ede] Running
	I0229 02:38:45.371307 1181813 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-080946" [8dcecc23-48b6-4819-9caf-8e8e295a8a01] Running
	I0229 02:38:45.371376 1181813 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-080946" [5fa0c841-df71-4bfb-92f2-f1cb610b48ae] Running
	I0229 02:38:45.371419 1181813 system_pods.go:61] "kube-proxy-vzb55" [e8a054cf-9017-45c9-bd0f-f7da64f2eb11] Running
	I0229 02:38:45.371442 1181813 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-080946" [e155b795-0929-4fca-ba60-928be0ba9ac3] Running
	I0229 02:38:45.371485 1181813 system_pods.go:61] "storage-provisioner" [6f6c22f0-4ca0-4271-bc24-c1e7ce34247b] Running
	I0229 02:38:45.371513 1181813 system_pods.go:74] duration metric: took 169.552187ms to wait for pod list to return data ...
	I0229 02:38:45.371546 1181813 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:38:45.561918 1181813 request.go:629] Waited for 190.279824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0229 02:38:45.565310 1181813 default_sa.go:45] found service account: "default"
	I0229 02:38:45.565348 1181813 default_sa.go:55] duration metric: took 193.790868ms for default service account to be created ...
	I0229 02:38:45.565363 1181813 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:38:45.762792 1181813 request.go:629] Waited for 197.346235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0229 02:38:45.769167 1181813 system_pods.go:86] 8 kube-system pods found
	I0229 02:38:45.769260 1181813 system_pods.go:89] "coredns-66bff467f8-dlmhz" [32f3296e-a681-4d38-9213-d1e77953c07c] Running
	I0229 02:38:45.769289 1181813 system_pods.go:89] "etcd-ingress-addon-legacy-080946" [6914fb1d-9fe3-4709-9980-535a94bf2f6b] Running
	I0229 02:38:45.769303 1181813 system_pods.go:89] "kindnet-vnbt5" [46ac7908-bfed-4887-8244-cf5aaa819ede] Running
	I0229 02:38:45.769309 1181813 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-080946" [8dcecc23-48b6-4819-9caf-8e8e295a8a01] Running
	I0229 02:38:45.769315 1181813 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-080946" [5fa0c841-df71-4bfb-92f2-f1cb610b48ae] Running
	I0229 02:38:45.769319 1181813 system_pods.go:89] "kube-proxy-vzb55" [e8a054cf-9017-45c9-bd0f-f7da64f2eb11] Running
	I0229 02:38:45.769323 1181813 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-080946" [e155b795-0929-4fca-ba60-928be0ba9ac3] Running
	I0229 02:38:45.769327 1181813 system_pods.go:89] "storage-provisioner" [6f6c22f0-4ca0-4271-bc24-c1e7ce34247b] Running
	I0229 02:38:45.769334 1181813 system_pods.go:126] duration metric: took 203.965068ms to wait for k8s-apps to be running ...
	I0229 02:38:45.769346 1181813 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:38:45.769407 1181813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:38:45.781181 1181813 system_svc.go:56] duration metric: took 11.829123ms WaitForService to wait for kubelet.
	I0229 02:38:45.781209 1181813 kubeadm.go:581] duration metric: took 19.314329629s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:38:45.781230 1181813 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:38:45.962845 1181813 request.go:629] Waited for 181.547241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0229 02:38:45.965697 1181813 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0229 02:38:45.965731 1181813 node_conditions.go:123] node cpu capacity is 2
	I0229 02:38:45.965743 1181813 node_conditions.go:105] duration metric: took 184.508084ms to run NodePressure ...
	I0229 02:38:45.965775 1181813 start.go:228] waiting for startup goroutines ...
	I0229 02:38:45.965790 1181813 start.go:233] waiting for cluster config update ...
	I0229 02:38:45.965804 1181813 start.go:242] writing updated cluster config ...
	I0229 02:38:45.966101 1181813 ssh_runner.go:195] Run: rm -f paused
	I0229 02:38:46.026123 1181813 start.go:601] kubectl: 1.29.2, cluster: 1.18.20 (minor skew: 11)
	I0229 02:38:46.028502 1181813 out.go:177] 
	W0229 02:38:46.030758 1181813 out.go:239] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0229 02:38:46.032606 1181813 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0229 02:38:46.034449 1181813 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-080946" cluster and "default" namespace by default
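	(For reference, the "minikube kubectl" hint printed above can be exercised against this profile roughly as follows; a minimal sketch, assuming minikube's global -p/--profile flag is used to select the cluster created in this run:
	    # invoke the kubectl version bundled with minikube (v1.18.20) rather than the host's v1.29.2
	    minikube -p ingress-addon-legacy-080946 kubectl -- get pods -A
	)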
	
	
	==> CRI-O <==
	Feb 29 02:41:51 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:51.583580584Z" level=info msg="Created container fda11790e040a0e26cc8bd391628259aa74d469bbcfe833c6b839e7d5b564ddf: default/hello-world-app-5f5d8b66bb-jr5d7/hello-world-app" id=7c826767-e9bb-4c8f-bd57-bb2d5937ee74 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Feb 29 02:41:51 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:51.585839031Z" level=info msg="Starting container: fda11790e040a0e26cc8bd391628259aa74d469bbcfe833c6b839e7d5b564ddf" id=5ee697be-8aac-443b-b1cd-ecf9bb864b4b name=/runtime.v1alpha2.RuntimeService/StartContainer
	Feb 29 02:41:51 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:51.600885098Z" level=info msg="Started container" PID=3712 containerID=fda11790e040a0e26cc8bd391628259aa74d469bbcfe833c6b839e7d5b564ddf description=default/hello-world-app-5f5d8b66bb-jr5d7/hello-world-app id=5ee697be-8aac-443b-b1cd-ecf9bb864b4b name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=e7af28792fd82e19c21f57e3594e7a730e9e1fb704d6ded9e3ddec73abeefb08
	Feb 29 02:41:51 ingress-addon-legacy-080946 conmon[3701]: conmon fda11790e040a0e26cc8 <ninfo>: container 3712 exited with status 1
	Feb 29 02:41:51 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:51.860869161Z" level=info msg="Stopping container: f5eca400dcc53624e1ca405d7a0a98b9bc1798ad70b1dd1146b6b51be7ae0e7e (timeout: 2s)" id=6232b46a-b0ec-43c2-a0c2-4250b2ff0fcf name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 29 02:41:51 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:51.881395267Z" level=info msg="Stopping container: f5eca400dcc53624e1ca405d7a0a98b9bc1798ad70b1dd1146b6b51be7ae0e7e (timeout: 2s)" id=fa5f282f-4d71-4d42-ba72-926621836155 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 29 02:41:51 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:51.915972697Z" level=info msg="Removing container: 4ae65cbfe015b866098db2c60342e27b08cc96984acdc0e4170f5a348a56efdf" id=f1db348d-7665-4a79-8a92-d29e41ac7488 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Feb 29 02:41:51 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:51.944670690Z" level=info msg="Removed container 4ae65cbfe015b866098db2c60342e27b08cc96984acdc0e4170f5a348a56efdf: default/hello-world-app-5f5d8b66bb-jr5d7/hello-world-app" id=f1db348d-7665-4a79-8a92-d29e41ac7488 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Feb 29 02:41:52 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:52.483859461Z" level=info msg="Stopping pod sandbox: bf5c88d1b30a26c6467ccf0f975dfd34cb58ae6fdecf57ed02ad17066d1488ce" id=6e560795-fcb0-4bbd-8010-cc6aeeb13b4e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 29 02:41:52 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:52.483904580Z" level=info msg="Stopped pod sandbox (already stopped): bf5c88d1b30a26c6467ccf0f975dfd34cb58ae6fdecf57ed02ad17066d1488ce" id=6e560795-fcb0-4bbd-8010-cc6aeeb13b4e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 29 02:41:53 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:53.875919578Z" level=warning msg="Stopping container f5eca400dcc53624e1ca405d7a0a98b9bc1798ad70b1dd1146b6b51be7ae0e7e with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=6232b46a-b0ec-43c2-a0c2-4250b2ff0fcf name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 29 02:41:53 ingress-addon-legacy-080946 conmon[2771]: conmon f5eca400dcc53624e1ca <ninfo>: container 2782 exited with status 137
	Feb 29 02:41:54 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:54.037152314Z" level=info msg="Stopped container f5eca400dcc53624e1ca405d7a0a98b9bc1798ad70b1dd1146b6b51be7ae0e7e: ingress-nginx/ingress-nginx-controller-7fcf777cb7-xw4jv/controller" id=fa5f282f-4d71-4d42-ba72-926621836155 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 29 02:41:54 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:54.037780987Z" level=info msg="Stopped container f5eca400dcc53624e1ca405d7a0a98b9bc1798ad70b1dd1146b6b51be7ae0e7e: ingress-nginx/ingress-nginx-controller-7fcf777cb7-xw4jv/controller" id=6232b46a-b0ec-43c2-a0c2-4250b2ff0fcf name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 29 02:41:54 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:54.037868371Z" level=info msg="Stopping pod sandbox: 66309c58a55adbe06ceb2548c997a774536069fc6c6d3dd6341f312c228b059e" id=9f4faa36-f50c-4e6a-b59e-ef7773e641ca name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 29 02:41:54 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:54.038456109Z" level=info msg="Stopping pod sandbox: 66309c58a55adbe06ceb2548c997a774536069fc6c6d3dd6341f312c228b059e" id=42e86131-c37d-410a-ba05-2544461fe14f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 29 02:41:54 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:54.041704137Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-WUCRZX4SQ5PWJU5I - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-FQRTAXFSBLMKPD6E - [0:0]\n-X KUBE-HP-FQRTAXFSBLMKPD6E\n-X KUBE-HP-WUCRZX4SQ5PWJU5I\nCOMMIT\n"
	Feb 29 02:41:54 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:54.043110389Z" level=info msg="Closing host port tcp:80"
	Feb 29 02:41:54 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:54.043164017Z" level=info msg="Closing host port tcp:443"
	Feb 29 02:41:54 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:54.044254940Z" level=info msg="Host port tcp:80 does not have an open socket"
	Feb 29 02:41:54 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:54.044289417Z" level=info msg="Host port tcp:443 does not have an open socket"
	Feb 29 02:41:54 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:54.044442861Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-xw4jv Namespace:ingress-nginx ID:66309c58a55adbe06ceb2548c997a774536069fc6c6d3dd6341f312c228b059e UID:997fc213-5c70-4be9-a851-9e470b8f6fc7 NetNS:/var/run/netns/9da324a8-9027-4782-b2a5-61cecb43d0e2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 29 02:41:54 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:54.044587065Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-xw4jv from CNI network \"kindnet\" (type=ptp)"
	Feb 29 02:41:54 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:54.073764817Z" level=info msg="Stopped pod sandbox: 66309c58a55adbe06ceb2548c997a774536069fc6c6d3dd6341f312c228b059e" id=9f4faa36-f50c-4e6a-b59e-ef7773e641ca name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 29 02:41:54 ingress-addon-legacy-080946 crio[904]: time="2024-02-29 02:41:54.073921739Z" level=info msg="Stopped pod sandbox (already stopped): 66309c58a55adbe06ceb2548c997a774536069fc6c6d3dd6341f312c228b059e" id=42e86131-c37d-410a-ba05-2544461fe14f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fda11790e040a       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                   8 seconds ago       Exited              hello-world-app           2                   e7af28792fd82       hello-world-app-5f5d8b66bb-jr5d7
	6d1218cf11e69       docker.io/library/nginx@sha256:34aa0a372d3220dc0448131f809c72d8085f79bdec8058ad6970fc034a395674                    2 minutes ago       Running             nginx                     0                   484614491dd51       nginx
	f5eca400dcc53       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   66309c58a55ad       ingress-nginx-controller-7fcf777cb7-xw4jv
	5185035fef3c2       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   962a368f25ae4       ingress-nginx-admission-patch-whr55
	4f75fb8864b47       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   641cc66c92c4e       ingress-nginx-admission-create-qbhzs
	866125e608c8a       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   e92861f80249c       storage-provisioner
	30a945f54b591       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   95561c0b47004       coredns-66bff467f8-dlmhz
	d28efd5188b91       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988                 3 minutes ago       Running             kindnet-cni               0                   b93dc7b0465c0       kindnet-vnbt5
	ef268652186a4       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   77f3fca25d8c7       kube-proxy-vzb55
	2ae3b20483637       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   4 minutes ago       Running             kube-scheduler            0                   cafd22d9d65ea       kube-scheduler-ingress-addon-legacy-080946
	9d6cebb6ac39a       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   4 minutes ago       Running             kube-apiserver            0                   50c203ee47152       kube-apiserver-ingress-addon-legacy-080946
	56fae37ee805c       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   4 minutes ago       Running             kube-controller-manager   0                   0eec8706b9458       kube-controller-manager-ingress-addon-legacy-080946
	aa81b40ee16a3       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   4 minutes ago       Running             etcd                      0                   829ac2e0c6d8d       etcd-ingress-addon-legacy-080946
	
	
	==> coredns [30a945f54b591a886d7bb2980544b42651c2203582190b21121f9cc9859a6069] <==
	[INFO] 10.244.0.5:37493 - 18790 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043437s
	[INFO] 10.244.0.5:38932 - 15607 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001862626s
	[INFO] 10.244.0.5:37493 - 30912 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00137371s
	[INFO] 10.244.0.5:37493 - 62625 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000967772s
	[INFO] 10.244.0.5:38932 - 2915 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001709905s
	[INFO] 10.244.0.5:38932 - 21328 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000101218s
	[INFO] 10.244.0.5:37493 - 50767 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000163683s
	[INFO] 10.244.0.5:57148 - 62262 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000080598s
	[INFO] 10.244.0.5:39682 - 64470 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000052816s
	[INFO] 10.244.0.5:39682 - 50136 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000057681s
	[INFO] 10.244.0.5:39682 - 42879 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045407s
	[INFO] 10.244.0.5:39682 - 20727 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003799s
	[INFO] 10.244.0.5:39682 - 18942 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035019s
	[INFO] 10.244.0.5:39682 - 49933 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061046s
	[INFO] 10.244.0.5:57148 - 48619 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000040106s
	[INFO] 10.244.0.5:57148 - 42913 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037596s
	[INFO] 10.244.0.5:57148 - 3952 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034388s
	[INFO] 10.244.0.5:57148 - 48010 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034905s
	[INFO] 10.244.0.5:57148 - 1190 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037965s
	[INFO] 10.244.0.5:39682 - 63222 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001233124s
	[INFO] 10.244.0.5:39682 - 15939 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001182565s
	[INFO] 10.244.0.5:57148 - 36496 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001217707s
	[INFO] 10.244.0.5:39682 - 53504 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000046597s
	[INFO] 10.244.0.5:57148 - 30027 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001023821s
	[INFO] 10.244.0.5:57148 - 29077 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041345s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-080946
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-080946
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=ingress-addon-legacy-080946
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T02_38_09_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:38:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-080946
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:41:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:41:42 +0000   Thu, 29 Feb 2024 02:37:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:41:42 +0000   Thu, 29 Feb 2024 02:37:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:41:42 +0000   Thu, 29 Feb 2024 02:37:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:41:42 +0000   Thu, 29 Feb 2024 02:38:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-080946
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f92b1cd2cd64e478d3c3c3908bc3604
	  System UUID:                0fc88878-c22a-4a90-9825-216387bd94bd
	  Boot ID:                    d15cd6b5-a0a6-45f5-95b2-2521c5763941
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-jr5d7                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-dlmhz                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m34s
	  kube-system                 etcd-ingress-addon-legacy-080946                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kindnet-vnbt5                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m34s
	  kube-system                 kube-apiserver-ingress-addon-legacy-080946             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-080946    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-proxy-vzb55                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-scheduler-ingress-addon-legacy-080946             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 4m2s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m2s (x5 over 4m2s)  kubelet     Node ingress-addon-legacy-080946 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x5 over 4m2s)  kubelet     Node ingress-addon-legacy-080946 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x5 over 4m2s)  kubelet     Node ingress-addon-legacy-080946 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m47s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m47s                kubelet     Node ingress-addon-legacy-080946 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m47s                kubelet     Node ingress-addon-legacy-080946 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m47s                kubelet     Node ingress-addon-legacy-080946 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m33s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m27s                kubelet     Node ingress-addon-legacy-080946 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001140] FS-Cache: O-key=[8] 'dc3f5c0100000000'
	[  +0.000853] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001015] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=00000000aa687443
	[  +0.001118] FS-Cache: N-key=[8] 'dc3f5c0100000000'
	[  +0.003308] FS-Cache: Duplicate cookie detected
	[  +0.000751] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001021] FS-Cache: O-cookie d=000000007d8e8356{9p.inode} n=000000008300a153
	[  +0.001080] FS-Cache: O-key=[8] 'dc3f5c0100000000'
	[  +0.000780] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000992] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=0000000030e39541
	[  +0.001094] FS-Cache: N-key=[8] 'dc3f5c0100000000'
	[  +2.358173] FS-Cache: Duplicate cookie detected
	[  +0.000778] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001287] FS-Cache: O-cookie d=000000007d8e8356{9p.inode} n=0000000069fd55be
	[  +0.001174] FS-Cache: O-key=[8] 'db3f5c0100000000'
	[  +0.000815] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001248] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=00000000aa687443
	[  +0.001191] FS-Cache: N-key=[8] 'db3f5c0100000000'
	[  +0.417142] FS-Cache: Duplicate cookie detected
	[  +0.000713] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000986] FS-Cache: O-cookie d=000000007d8e8356{9p.inode} n=0000000058168a0d
	[  +0.001094] FS-Cache: O-key=[8] 'e13f5c0100000000'
	[  +0.000951] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001209] FS-Cache: N-cookie d=000000007d8e8356{9p.inode} n=000000005473ea81
	[  +0.001136] FS-Cache: N-key=[8] 'e13f5c0100000000'
	
	
	==> etcd [aa81b40ee16a31896b36fa7f2efd6e3d2e2f10cb2b67ca1b7fc66391f33e2d8b] <==
	raft2024/02/29 02:37:58 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-02-29 02:37:58.846744 W | auth: simple token is not cryptographically signed
	2024-02-29 02:37:58.849457 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-02-29 02:37:58.851466 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/02/29 02:37:58 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-02-29 02:37:58.851921 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-02-29 02:37:58.852189 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-02-29 02:37:58.852265 I | embed: listening for peers on 192.168.49.2:2380
	2024-02-29 02:37:58.852467 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/02/29 02:37:59 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/02/29 02:37:59 INFO: aec36adc501070cc became candidate at term 2
	raft2024/02/29 02:37:59 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/02/29 02:37:59 INFO: aec36adc501070cc became leader at term 2
	raft2024/02/29 02:37:59 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-02-29 02:37:59.054333 I | etcdserver: setting up the initial cluster version to 3.4
	2024-02-29 02:37:59.076355 I | etcdserver: published {Name:ingress-addon-legacy-080946 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-02-29 02:37:59.076388 I | embed: ready to serve client requests
	2024-02-29 02:37:59.219891 I | embed: serving client requests on 127.0.0.1:2379
	2024-02-29 02:37:59.496044 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-02-29 02:37:59.496164 I | etcdserver/api: enabled capabilities for version 3.4
	2024-02-29 02:37:59.496292 W | etcdserver: request "ID:8128027501135574788 Method:\"PUT\" Path:\"/0/version\" Val:\"3.4.0\" " with result "" took too long (419.866538ms) to execute
	2024-02-29 02:37:59.528123 I | embed: ready to serve client requests
	2024-02-29 02:37:59.529506 I | embed: serving client requests on 192.168.49.2:2379
	2024-02-29 02:38:00.155816 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true " with result "range_response_count:0 size:4" took too long (128.079217ms) to execute
	2024-02-29 02:38:02.032076 W | etcdserver: read-only range request "key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true " with result "range_response_count:0 size:4" took too long (153.452868ms) to execute
	
	
	==> kernel <==
	 02:41:59 up  6:24,  0 users,  load average: 0.24, 0.88, 1.59
	Linux ingress-addon-legacy-080946 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [d28efd5188b91db90def614b69a2a1499b9b62450e994e5fd73063e29d5cb40d] <==
	I0229 02:39:59.493691       1 main.go:227] handling current node
	I0229 02:40:09.505331       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:40:09.505457       1 main.go:227] handling current node
	I0229 02:40:19.514782       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:40:19.514809       1 main.go:227] handling current node
	I0229 02:40:29.518751       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:40:29.518796       1 main.go:227] handling current node
	I0229 02:40:39.521920       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:40:39.521946       1 main.go:227] handling current node
	I0229 02:40:49.533226       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:40:49.533256       1 main.go:227] handling current node
	I0229 02:40:59.539580       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:40:59.539607       1 main.go:227] handling current node
	I0229 02:41:09.546690       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:41:09.546720       1 main.go:227] handling current node
	I0229 02:41:19.550142       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:41:19.550170       1 main.go:227] handling current node
	I0229 02:41:29.554312       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:41:29.554341       1 main.go:227] handling current node
	I0229 02:41:39.562199       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:41:39.562228       1 main.go:227] handling current node
	I0229 02:41:49.566315       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:41:49.566343       1 main.go:227] handling current node
	I0229 02:41:59.577332       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0229 02:41:59.577441       1 main.go:227] handling current node
	
	
	==> kube-apiserver [9d6cebb6ac39a7c6d7e197ae79bb7d8a496a4c689338a1ab1fab91ab2c24a9b2] <==
	I0229 02:38:06.401271       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0229 02:38:06.402835       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 02:38:06.409910       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 02:38:06.409944       1 cache.go:39] Caches are synced for autoregister controller
	I0229 02:38:06.483581       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0229 02:38:07.199018       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0229 02:38:07.199047       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0229 02:38:07.212482       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0229 02:38:07.215977       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0229 02:38:07.216018       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0229 02:38:07.581089       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0229 02:38:07.626573       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0229 02:38:07.741672       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0229 02:38:07.742613       1 controller.go:609] quota admission added evaluator for: endpoints
	I0229 02:38:07.748617       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0229 02:38:08.647500       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0229 02:38:09.046872       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0229 02:38:09.148766       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0229 02:38:12.399961       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 02:38:25.364207       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0229 02:38:25.825782       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0229 02:38:46.876239       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0229 02:39:13.840411       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0229 02:41:51.881714       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E0229 02:41:52.445748       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [56fae37ee805c892ed7ff14d4bb46c2a42eceff46aee20f08bda5a93c4e99e70] <==
	I0229 02:38:25.902300       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I0229 02:38:25.904261       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0229 02:38:25.904338       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0229 02:38:25.904978       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0229 02:38:25.953985       1 shared_informer.go:230] Caches are synced for disruption 
	I0229 02:38:25.961316       1 disruption.go:339] Sending events to api server.
	I0229 02:38:25.961560       1 shared_informer.go:230] Caches are synced for resource quota 
	I0229 02:38:25.991156       1 shared_informer.go:230] Caches are synced for resource quota 
	E0229 02:38:26.087968       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"9a96ef10-c89f-4860-9a67-bab98717d181", ResourceVersion:"242", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63844771089, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240202-8f1494ea\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000c48c40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000c48da0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000c48e00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000c48ea0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000c48f80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000c48fe0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240202-8f1494ea", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000c490a0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000c490e0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40008fcc80), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001070398), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004f24d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000f9c8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40010703e0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E0229 02:38:26.100448       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"32437de3-3824-42bc-8df0-2bd1e6aaf1b6", ResourceVersion:"227", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63844771089, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000c49180), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x4000c491a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000c491c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4000842d40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x4000c491e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000c49200), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000c49260)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40008fcdc0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001070558), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004f2540), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000f9d0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40010705a8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0229 02:38:26.301056       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"08ef6606-c096-40a1-9a91-43bee97df629", APIVersion:"apps/v1", ResourceVersion:"376", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	E0229 02:38:26.372664       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"9a96ef10-c89f-4860-9a67-bab98717d181", ResourceVersion:"364", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63844771089, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240202-8f1494ea\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001bfe660), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001bfe680)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001bfe6a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001bfe6c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001bfe6e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"",
UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001bfe700), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*
v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001bfe720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStore
VolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.
CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001bfe740), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*
v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240202-8f1494ea", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001bfe760)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001bfe7a0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10
0m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1
.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001c00500), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001bf8348), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400086a620), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Tolera
tion{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001a42db8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001bf8390)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please a
pply your changes to the latest version and try again
	I0229 02:38:26.546379       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"c38316b9-3ea6-4978-a970-7658d8359265", APIVersion:"apps/v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-z6b98
	I0229 02:38:35.352525       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0229 02:38:46.851656       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"856c119d-2d1d-4c75-a53b-f5b403f2fd99", APIVersion:"apps/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0229 02:38:46.870612       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"03a7ba93-e55a-4b17-b69a-addf2799887e", APIVersion:"apps/v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-xw4jv
	I0229 02:38:46.902402       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"41161905-7028-498d-8901-82c722c26dc3", APIVersion:"batch/v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-qbhzs
	I0229 02:38:46.952905       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"c6a27fde-ead4-4b94-96d9-d9dff1056a6c", APIVersion:"batch/v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-whr55
	I0229 02:38:49.587927       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"41161905-7028-498d-8901-82c722c26dc3", APIVersion:"batch/v1", ResourceVersion:"507", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0229 02:38:49.603488       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"c6a27fde-ead4-4b94-96d9-d9dff1056a6c", APIVersion:"batch/v1", ResourceVersion:"517", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0229 02:41:33.253295       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"b5814d64-ed19-417b-81be-c409fb6d537c", APIVersion:"apps/v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0229 02:41:33.275921       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"a7e18f5d-3eeb-4eec-a77f-b36fb962b9ab", APIVersion:"apps/v1", ResourceVersion:"732", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-jr5d7
	
	
	==> kube-proxy [ef268652186a4fd1c931d3a0882243a17ef5d698a9ce2d2f2ed1ef026b9e61f9] <==
	W0229 02:38:26.719515       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0229 02:38:26.731803       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0229 02:38:26.731922       1 server_others.go:186] Using iptables Proxier.
	I0229 02:38:26.732250       1 server.go:583] Version: v1.18.20
	I0229 02:38:26.734152       1 config.go:133] Starting endpoints config controller
	I0229 02:38:26.734224       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0229 02:38:26.734323       1 config.go:315] Starting service config controller
	I0229 02:38:26.734355       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0229 02:38:26.839623       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0229 02:38:26.839645       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [2ae3b20483637c34381b0a883e6001afc4156c9b827cd6fb4e09081e246500e5] <==
	W0229 02:38:06.360188       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 02:38:06.360220       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 02:38:06.399829       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0229 02:38:06.399856       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0229 02:38:06.402285       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0229 02:38:06.402482       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:38:06.402552       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:38:06.402637       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0229 02:38:06.413022       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 02:38:06.413504       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 02:38:06.414410       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 02:38:06.415460       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 02:38:06.416261       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 02:38:06.416431       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 02:38:06.416590       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 02:38:06.416738       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 02:38:06.416878       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 02:38:06.417024       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 02:38:06.417259       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 02:38:06.417482       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 02:38:07.414923       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 02:38:07.424764       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0229 02:38:10.604124       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0229 02:38:25.476846       1 factory.go:503] pod: kube-system/coredns-66bff467f8-z6b98 is already present in the active queue
	E0229 02:38:25.501241       1 factory.go:503] pod: kube-system/coredns-66bff467f8-dlmhz is already present in the active queue
	
	
	==> kubelet <==
	Feb 29 02:41:37 ingress-addon-legacy-080946 kubelet[1672]: I0229 02:41:37.891774    1672 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7bbf9bddb7e86bcc70c92bf33f8a118a3bcc0d1e2bbef54c3759a21222b2134d
	Feb 29 02:41:37 ingress-addon-legacy-080946 kubelet[1672]: I0229 02:41:37.891949    1672 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4ae65cbfe015b866098db2c60342e27b08cc96984acdc0e4170f5a348a56efdf
	Feb 29 02:41:37 ingress-addon-legacy-080946 kubelet[1672]: E0229 02:41:37.892986    1672 pod_workers.go:191] Error syncing pod 883d1cd4-3d73-4992-9d4d-994d9dbf5d51 ("hello-world-app-5f5d8b66bb-jr5d7_default(883d1cd4-3d73-4992-9d4d-994d9dbf5d51)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-jr5d7_default(883d1cd4-3d73-4992-9d4d-994d9dbf5d51)"
	Feb 29 02:41:38 ingress-addon-legacy-080946 kubelet[1672]: I0229 02:41:38.894306    1672 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4ae65cbfe015b866098db2c60342e27b08cc96984acdc0e4170f5a348a56efdf
	Feb 29 02:41:38 ingress-addon-legacy-080946 kubelet[1672]: E0229 02:41:38.894540    1672 pod_workers.go:191] Error syncing pod 883d1cd4-3d73-4992-9d4d-994d9dbf5d51 ("hello-world-app-5f5d8b66bb-jr5d7_default(883d1cd4-3d73-4992-9d4d-994d9dbf5d51)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-jr5d7_default(883d1cd4-3d73-4992-9d4d-994d9dbf5d51)"
	Feb 29 02:41:40 ingress-addon-legacy-080946 kubelet[1672]: E0229 02:41:40.484948    1672 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 29 02:41:40 ingress-addon-legacy-080946 kubelet[1672]: E0229 02:41:40.484989    1672 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 29 02:41:40 ingress-addon-legacy-080946 kubelet[1672]: E0229 02:41:40.485031    1672 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 29 02:41:40 ingress-addon-legacy-080946 kubelet[1672]: E0229 02:41:40.485062    1672 pod_workers.go:191] Error syncing pod 95967e2b-1b5a-4790-8d37-abbcee4dbd9e ("kube-ingress-dns-minikube_kube-system(95967e2b-1b5a-4790-8d37-abbcee4dbd9e)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Feb 29 02:41:49 ingress-addon-legacy-080946 kubelet[1672]: I0229 02:41:49.306702    1672 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-d5cn4" (UniqueName: "kubernetes.io/secret/95967e2b-1b5a-4790-8d37-abbcee4dbd9e-minikube-ingress-dns-token-d5cn4") pod "95967e2b-1b5a-4790-8d37-abbcee4dbd9e" (UID: "95967e2b-1b5a-4790-8d37-abbcee4dbd9e")
	Feb 29 02:41:49 ingress-addon-legacy-080946 kubelet[1672]: I0229 02:41:49.310479    1672 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95967e2b-1b5a-4790-8d37-abbcee4dbd9e-minikube-ingress-dns-token-d5cn4" (OuterVolumeSpecName: "minikube-ingress-dns-token-d5cn4") pod "95967e2b-1b5a-4790-8d37-abbcee4dbd9e" (UID: "95967e2b-1b5a-4790-8d37-abbcee4dbd9e"). InnerVolumeSpecName "minikube-ingress-dns-token-d5cn4". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 29 02:41:49 ingress-addon-legacy-080946 kubelet[1672]: I0229 02:41:49.407028    1672 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-d5cn4" (UniqueName: "kubernetes.io/secret/95967e2b-1b5a-4790-8d37-abbcee4dbd9e-minikube-ingress-dns-token-d5cn4") on node "ingress-addon-legacy-080946" DevicePath ""
	Feb 29 02:41:51 ingress-addon-legacy-080946 kubelet[1672]: I0229 02:41:51.484580    1672 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4ae65cbfe015b866098db2c60342e27b08cc96984acdc0e4170f5a348a56efdf
	Feb 29 02:41:51 ingress-addon-legacy-080946 kubelet[1672]: E0229 02:41:51.867233    1672 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-xw4jv.17b83529b6916895", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-xw4jv", UID:"997fc213-5c70-4be9-a851-9e470b8f6fc7", APIVersion:"v1", ResourceVersion:"497", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-080946"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc170199bf342d295, ext:222879162598, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc170199bf342d295, ext:222879162598, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-xw4jv.17b83529b6916895" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 29 02:41:51 ingress-addon-legacy-080946 kubelet[1672]: E0229 02:41:51.887399    1672 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-xw4jv.17b83529b6916895", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-xw4jv", UID:"997fc213-5c70-4be9-a851-9e470b8f6fc7", APIVersion:"v1", ResourceVersion:"497", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-080946"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc170199bf342d295, ext:222879162598, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc170199bf47fcf3d, ext:222899936646, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-xw4jv.17b83529b6916895" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 29 02:41:51 ingress-addon-legacy-080946 kubelet[1672]: I0229 02:41:51.914130    1672 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4ae65cbfe015b866098db2c60342e27b08cc96984acdc0e4170f5a348a56efdf
	Feb 29 02:41:51 ingress-addon-legacy-080946 kubelet[1672]: I0229 02:41:51.914377    1672 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fda11790e040a0e26cc8bd391628259aa74d469bbcfe833c6b839e7d5b564ddf
	Feb 29 02:41:51 ingress-addon-legacy-080946 kubelet[1672]: E0229 02:41:51.914608    1672 pod_workers.go:191] Error syncing pod 883d1cd4-3d73-4992-9d4d-994d9dbf5d51 ("hello-world-app-5f5d8b66bb-jr5d7_default(883d1cd4-3d73-4992-9d4d-994d9dbf5d51)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-jr5d7_default(883d1cd4-3d73-4992-9d4d-994d9dbf5d51)"
	Feb 29 02:41:54 ingress-addon-legacy-080946 kubelet[1672]: W0229 02:41:54.920781    1672 pod_container_deletor.go:77] Container "66309c58a55adbe06ceb2548c997a774536069fc6c6d3dd6341f312c228b059e" not found in pod's containers
	Feb 29 02:41:56 ingress-addon-legacy-080946 kubelet[1672]: I0229 02:41:56.022693    1672 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-r2b2j" (UniqueName: "kubernetes.io/secret/997fc213-5c70-4be9-a851-9e470b8f6fc7-ingress-nginx-token-r2b2j") pod "997fc213-5c70-4be9-a851-9e470b8f6fc7" (UID: "997fc213-5c70-4be9-a851-9e470b8f6fc7")
	Feb 29 02:41:56 ingress-addon-legacy-080946 kubelet[1672]: I0229 02:41:56.022787    1672 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/997fc213-5c70-4be9-a851-9e470b8f6fc7-webhook-cert") pod "997fc213-5c70-4be9-a851-9e470b8f6fc7" (UID: "997fc213-5c70-4be9-a851-9e470b8f6fc7")
	Feb 29 02:41:56 ingress-addon-legacy-080946 kubelet[1672]: I0229 02:41:56.027565    1672 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/997fc213-5c70-4be9-a851-9e470b8f6fc7-ingress-nginx-token-r2b2j" (OuterVolumeSpecName: "ingress-nginx-token-r2b2j") pod "997fc213-5c70-4be9-a851-9e470b8f6fc7" (UID: "997fc213-5c70-4be9-a851-9e470b8f6fc7"). InnerVolumeSpecName "ingress-nginx-token-r2b2j". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 29 02:41:56 ingress-addon-legacy-080946 kubelet[1672]: I0229 02:41:56.029668    1672 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/997fc213-5c70-4be9-a851-9e470b8f6fc7-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "997fc213-5c70-4be9-a851-9e470b8f6fc7" (UID: "997fc213-5c70-4be9-a851-9e470b8f6fc7"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 29 02:41:56 ingress-addon-legacy-080946 kubelet[1672]: I0229 02:41:56.123087    1672 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/997fc213-5c70-4be9-a851-9e470b8f6fc7-webhook-cert") on node "ingress-addon-legacy-080946" DevicePath ""
	Feb 29 02:41:56 ingress-addon-legacy-080946 kubelet[1672]: I0229 02:41:56.123137    1672 reconciler.go:319] Volume detached for volume "ingress-nginx-token-r2b2j" (UniqueName: "kubernetes.io/secret/997fc213-5c70-4be9-a851-9e470b8f6fc7-ingress-nginx-token-r2b2j") on node "ingress-addon-legacy-080946" DevicePath ""
	
	
	==> storage-provisioner [866125e608c8a7db3e58cd728ce0a1b4f0a20253e0d3d65e45d053ca2f0f2727] <==
	I0229 02:38:39.179179       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 02:38:39.197101       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 02:38:39.197233       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 02:38:39.203782       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 02:38:39.204272       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25d665cf-84c9-4ec1-a58d-abc502fdfc50", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-080946_2b84faca-f738-47c5-93e9-fd0b3c0ce971 became leader
	I0229 02:38:39.204720       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-080946_2b84faca-f738-47c5-93e9-fd0b3c0ce971!
	I0229 02:38:39.305396       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-080946_2b84faca-f738-47c5-93e9-fd0b3c0ce971!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-080946 -n ingress-addon-legacy-080946
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-080946 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (182.13s)
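
Note on the kubelet ImageInspectError lines above: CRI-O refused to start the kube-ingress-dns pod because the short image name "cryptexlabs/minikube-ingress-dns:0.3.0" could not be resolved, and /etc/containers/registries.conf on the node defines no unqualified-search registries. A minimal illustrative sketch of the kind of entry that would let such short names resolve against docker.io (an assumption for illustration, not the test's actual remediation):

	# /etc/containers/registries.conf (containers-registries.conf v2, TOML)
	# Allow unqualified image names such as "cryptexlabs/minikube-ingress-dns"
	# to be searched against docker.io.
	unqualified-search-registries = ["docker.io"]

With such an entry, the short name would be tried as docker.io/cryptexlabs/minikube-ingress-dns:0.3.0; referencing the image by its fully qualified name avoids the short-name lookup entirely.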

                                                
                                    

Test pass (284/320)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 28.19
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
9 TestDownloadOnly/v1.16.0/DeleteAll 0.2
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 19.97
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.21
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 18.27
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.21
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.57
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 143.76
38 TestAddons/parallel/Registry 15.26
40 TestAddons/parallel/InspektorGadget 11.98
41 TestAddons/parallel/MetricsServer 6.62
44 TestAddons/parallel/CSI 68
45 TestAddons/parallel/Headlamp 14.11
46 TestAddons/parallel/CloudSpanner 5.69
47 TestAddons/parallel/LocalPath 8.55
48 TestAddons/parallel/NvidiaDevicePlugin 5.51
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.17
53 TestAddons/StoppedEnableDisable 12.21
54 TestCertOptions 38.75
55 TestCertExpiration 249.04
57 TestForceSystemdFlag 42.32
58 TestForceSystemdEnv 45.11
64 TestErrorSpam/setup 27.92
65 TestErrorSpam/start 0.77
66 TestErrorSpam/status 1.05
67 TestErrorSpam/pause 1.72
68 TestErrorSpam/unpause 1.95
69 TestErrorSpam/stop 1.46
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 46.8
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 29.21
76 TestFunctional/serial/KubeContext 0.06
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.52
81 TestFunctional/serial/CacheCmd/cache/add_local 1.09
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.94
86 TestFunctional/serial/CacheCmd/cache/delete 0.14
87 TestFunctional/serial/MinikubeKubectlCmd 0.15
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
91 TestFunctional/serial/LogsCmd 1.67
92 TestFunctional/serial/LogsFileCmd 1.69
93 TestFunctional/serial/InvalidService 4.34
95 TestFunctional/parallel/ConfigCmd 0.51
96 TestFunctional/parallel/DashboardCmd 15.3
97 TestFunctional/parallel/DryRun 0.44
98 TestFunctional/parallel/InternationalLanguage 0.21
99 TestFunctional/parallel/StatusCmd 1.37
103 TestFunctional/parallel/ServiceCmdConnect 10.62
104 TestFunctional/parallel/AddonsCmd 0.15
105 TestFunctional/parallel/PersistentVolumeClaim 24.71
107 TestFunctional/parallel/SSHCmd 0.98
108 TestFunctional/parallel/CpCmd 2.73
110 TestFunctional/parallel/FileSync 0.38
111 TestFunctional/parallel/CertSync 1.86
115 TestFunctional/parallel/NodeLabels 0.12
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.77
119 TestFunctional/parallel/License 0.37
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.52
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 8.23
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
133 TestFunctional/parallel/ProfileCmd/profile_list 0.42
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
135 TestFunctional/parallel/MountCmd/any-port 7.45
136 TestFunctional/parallel/ServiceCmd/List 0.51
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
139 TestFunctional/parallel/ServiceCmd/Format 0.45
140 TestFunctional/parallel/ServiceCmd/URL 0.56
141 TestFunctional/parallel/MountCmd/specific-port 2.63
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.1
143 TestFunctional/parallel/Version/short 0.08
144 TestFunctional/parallel/Version/components 1.28
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.82
150 TestFunctional/parallel/ImageCommands/Setup 2.61
151 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.77
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.37
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.54
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.9
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.65
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.44
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.94
161 TestFunctional/delete_addon-resizer_images 0.08
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestIngressAddonLegacy/StartLegacyK8sCluster 97.43
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.95
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.68
174 TestJSONOutput/start/Command 51.49
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 0.76
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 0.68
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 5.95
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.24
199 TestKicCustomNetwork/create_custom_network 40.25
200 TestKicCustomNetwork/use_default_bridge_network 32.56
201 TestKicExistingNetwork 33.02
202 TestKicCustomSubnet 31.74
203 TestKicStaticIP 33.56
204 TestMainNoArgs 0.06
205 TestMinikubeProfile 69.94
208 TestMountStart/serial/StartWithMountFirst 7.4
209 TestMountStart/serial/VerifyMountFirst 0.28
210 TestMountStart/serial/StartWithMountSecond 6.89
211 TestMountStart/serial/VerifyMountSecond 0.27
212 TestMountStart/serial/DeleteFirst 1.63
213 TestMountStart/serial/VerifyMountPostDelete 0.26
214 TestMountStart/serial/Stop 1.21
215 TestMountStart/serial/RestartStopped 7.99
216 TestMountStart/serial/VerifyMountPostStop 0.28
219 TestMultiNode/serial/FreshStart2Nodes 70.01
220 TestMultiNode/serial/DeployApp2Nodes 6.29
221 TestMultiNode/serial/PingHostFrom2Pods 1.04
222 TestMultiNode/serial/AddNode 19.61
223 TestMultiNode/serial/MultiNodeLabels 0.1
224 TestMultiNode/serial/ProfileList 0.32
225 TestMultiNode/serial/CopyFile 10.29
226 TestMultiNode/serial/StopNode 2.27
227 TestMultiNode/serial/StartAfterStop 11.72
228 TestMultiNode/serial/RestartKeepsNodes 119.13
229 TestMultiNode/serial/DeleteNode 4.98
230 TestMultiNode/serial/StopMultiNode 23.89
231 TestMultiNode/serial/RestartMultiNode 81.27
232 TestMultiNode/serial/ValidateNameConflict 33.58
237 TestPreload 168.54
239 TestScheduledStopUnix 110.03
242 TestInsufficientStorage 11.02
243 TestRunningBinaryUpgrade 76.81
245 TestKubernetesUpgrade 139.02
246 TestMissingContainerUpgrade 163.28
248 TestPause/serial/Start 58.96
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
251 TestNoKubernetes/serial/StartWithK8s 42.21
252 TestNoKubernetes/serial/StartWithStopK8s 6.75
253 TestNoKubernetes/serial/Start 6.54
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
255 TestNoKubernetes/serial/ProfileList 0.98
256 TestNoKubernetes/serial/Stop 1.22
257 TestNoKubernetes/serial/StartNoArgs 6.98
258 TestPause/serial/SecondStartNoReconfiguration 47.26
259 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
260 TestPause/serial/Pause 0.81
261 TestPause/serial/VerifyStatus 0.33
262 TestPause/serial/Unpause 0.81
263 TestPause/serial/PauseAgain 1
264 TestPause/serial/DeletePaused 4.32
265 TestPause/serial/VerifyDeletedResources 0.18
266 TestStoppedBinaryUpgrade/Setup 1.32
267 TestStoppedBinaryUpgrade/Upgrade 81.59
268 TestStoppedBinaryUpgrade/MinikubeLogs 2.09
283 TestNetworkPlugins/group/false 5.08
288 TestStartStop/group/old-k8s-version/serial/FirstStart 135.83
289 TestStartStop/group/old-k8s-version/serial/DeployApp 9.49
290 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.03
291 TestStartStop/group/old-k8s-version/serial/Stop 12.16
292 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
293 TestStartStop/group/old-k8s-version/serial/SecondStart 451.8
295 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.4
296 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
298 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.94
299 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
300 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 620.27
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
304 TestStartStop/group/old-k8s-version/serial/Pause 3.26
306 TestStartStop/group/embed-certs/serial/FirstStart 77.23
307 TestStartStop/group/embed-certs/serial/DeployApp 10.34
308 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.15
309 TestStartStop/group/embed-certs/serial/Stop 12.17
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
311 TestStartStop/group/embed-certs/serial/SecondStart 344.92
312 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
313 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
314 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
315 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.19
317 TestStartStop/group/no-preload/serial/FirstStart 65.9
318 TestStartStop/group/no-preload/serial/DeployApp 8.35
319 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
320 TestStartStop/group/no-preload/serial/Stop 11.96
321 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
322 TestStartStop/group/no-preload/serial/SecondStart 363.06
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
326 TestStartStop/group/embed-certs/serial/Pause 3.2
328 TestStartStop/group/newest-cni/serial/FirstStart 44.05
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.16
331 TestStartStop/group/newest-cni/serial/Stop 1.3
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
333 TestStartStop/group/newest-cni/serial/SecondStart 31.34
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
337 TestStartStop/group/newest-cni/serial/Pause 2.86
338 TestNetworkPlugins/group/auto/Start 49.73
339 TestNetworkPlugins/group/auto/KubeletFlags 0.33
340 TestNetworkPlugins/group/auto/NetCatPod 10.28
341 TestNetworkPlugins/group/auto/DNS 0.19
342 TestNetworkPlugins/group/auto/Localhost 0.15
343 TestNetworkPlugins/group/auto/HairPin 0.16
344 TestNetworkPlugins/group/kindnet/Start 50.11
345 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.64
347 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
348 TestNetworkPlugins/group/kindnet/DNS 0.24
349 TestNetworkPlugins/group/kindnet/Localhost 0.23
350 TestNetworkPlugins/group/kindnet/HairPin 0.25
351 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
352 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
353 TestNetworkPlugins/group/calico/Start 86.71
354 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
355 TestStartStop/group/no-preload/serial/Pause 4.23
356 TestNetworkPlugins/group/custom-flannel/Start 71.92
357 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
358 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.31
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/calico/KubeletFlags 0.29
361 TestNetworkPlugins/group/calico/NetCatPod 10.26
362 TestNetworkPlugins/group/custom-flannel/DNS 0.26
363 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
364 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
365 TestNetworkPlugins/group/calico/DNS 0.3
366 TestNetworkPlugins/group/calico/Localhost 0.25
367 TestNetworkPlugins/group/calico/HairPin 0.23
368 TestNetworkPlugins/group/enable-default-cni/Start 92.49
369 TestNetworkPlugins/group/flannel/Start 70.14
370 TestNetworkPlugins/group/flannel/ControllerPod 6.01
371 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
372 TestNetworkPlugins/group/flannel/NetCatPod 10.27
373 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
374 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
375 TestNetworkPlugins/group/flannel/DNS 0.19
376 TestNetworkPlugins/group/flannel/Localhost 0.16
377 TestNetworkPlugins/group/flannel/HairPin 0.15
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
381 TestNetworkPlugins/group/bridge/Start 83.21
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
383 TestNetworkPlugins/group/bridge/NetCatPod 11.26
384 TestNetworkPlugins/group/bridge/DNS 0.17
385 TestNetworkPlugins/group/bridge/Localhost 0.15
386 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.16.0/json-events (28.19s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-096542 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-096542 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (28.19363546s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (28.19s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-096542
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-096542: exit status 85 (80.219441ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-096542 | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |          |
	|         | -p download-only-096542        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:25:03
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:25:03.383960 1153663 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:25:03.384137 1153663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:25:03.384147 1153663 out.go:304] Setting ErrFile to fd 2...
	I0229 02:25:03.384152 1153663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:25:03.384449 1153663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
	W0229 02:25:03.384598 1153663 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18063-1148303/.minikube/config/config.json: open /home/jenkins/minikube-integration/18063-1148303/.minikube/config/config.json: no such file or directory
	I0229 02:25:03.385056 1153663 out.go:298] Setting JSON to true
	I0229 02:25:03.386061 1153663 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22050,"bootTime":1709151454,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0229 02:25:03.386134 1153663 start.go:139] virtualization:  
	I0229 02:25:03.389374 1153663 out.go:97] [download-only-096542] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0229 02:25:03.391444 1153663 out.go:169] MINIKUBE_LOCATION=18063
	W0229 02:25:03.389523 1153663 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball: no such file or directory
	I0229 02:25:03.389559 1153663 notify.go:220] Checking for updates...
	I0229 02:25:03.395084 1153663 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:25:03.397036 1153663 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	I0229 02:25:03.399022 1153663 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	I0229 02:25:03.400592 1153663 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0229 02:25:03.404209 1153663 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 02:25:03.404486 1153663 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:25:03.426349 1153663 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0229 02:25:03.426448 1153663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:25:03.494216 1153663 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-29 02:25:03.484947298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:25:03.494321 1153663 docker.go:295] overlay module found
	I0229 02:25:03.496737 1153663 out.go:97] Using the docker driver based on user configuration
	I0229 02:25:03.496764 1153663 start.go:299] selected driver: docker
	I0229 02:25:03.496771 1153663 start.go:903] validating driver "docker" against <nil>
	I0229 02:25:03.496881 1153663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:25:03.551842 1153663 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-29 02:25:03.543503878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:25:03.552068 1153663 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 02:25:03.552333 1153663 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0229 02:25:03.552558 1153663 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 02:25:03.554811 1153663 out.go:169] Using Docker driver with root privileges
	I0229 02:25:03.556490 1153663 cni.go:84] Creating CNI manager for ""
	I0229 02:25:03.556511 1153663 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0229 02:25:03.556521 1153663 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0229 02:25:03.556538 1153663 start_flags.go:323] config:
	{Name:download-only-096542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-096542 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:25:03.559074 1153663 out.go:97] Starting control plane node download-only-096542 in cluster download-only-096542
	I0229 02:25:03.559094 1153663 cache.go:121] Beginning downloading kic base image for docker with crio
	I0229 02:25:03.560959 1153663 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0229 02:25:03.560986 1153663 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:25:03.561061 1153663 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 02:25:03.575342 1153663 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0229 02:25:03.575516 1153663 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0229 02:25:03.575614 1153663 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0229 02:25:03.637737 1153663 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0229 02:25:03.637764 1153663 cache.go:56] Caching tarball of preloaded images
	I0229 02:25:03.637935 1153663 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:25:03.640610 1153663 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0229 02:25:03.640630 1153663 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0229 02:25:03.750797 1153663 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0229 02:25:09.393979 1153663 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-096542"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
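
For reference, the preload download recorded above can be reproduced by hand; the URL and md5 value are copied verbatim from the log, while the local file name is simply the tarball's basename (a minimal sketch, not the test's own code):

	# Fetch the v1.16.0 cri-o/arm64 preload tarball the test downloaded.
	curl -fL -O https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	# Compare against the md5 the downloader requested (the ?checksum=md5:... query in the log).
	echo "743cd3b7071469270e4dbdc0d89badaa  preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4" | md5sum -c -

The v1.28.4 and v1.29.0-rc.2 runs below follow the same pattern with their own tarball URLs and checksums.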

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-096542
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (19.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-382285 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-382285 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (19.97267928s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (19.97s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-382285
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-382285: exit status 85 (83.415094ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-096542 | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | -p download-only-096542        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	| delete  | -p download-only-096542        | download-only-096542 | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	| start   | -o=json --download-only        | download-only-382285 | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | -p download-only-382285        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:25:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:25:31.997923 1153823 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:25:31.998199 1153823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:25:31.998230 1153823 out.go:304] Setting ErrFile to fd 2...
	I0229 02:25:31.998249 1153823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:25:31.998573 1153823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
	I0229 02:25:31.999208 1153823 out.go:298] Setting JSON to true
	I0229 02:25:32.000384 1153823 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22078,"bootTime":1709151454,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0229 02:25:32.000532 1153823 start.go:139] virtualization:  
	I0229 02:25:32.008585 1153823 out.go:97] [download-only-382285] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0229 02:25:32.010974 1153823 out.go:169] MINIKUBE_LOCATION=18063
	I0229 02:25:32.008896 1153823 notify.go:220] Checking for updates...
	I0229 02:25:32.015072 1153823 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:25:32.017845 1153823 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	I0229 02:25:32.019719 1153823 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	I0229 02:25:32.021775 1153823 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0229 02:25:32.026330 1153823 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 02:25:32.026617 1153823 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:25:32.050137 1153823 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0229 02:25:32.050248 1153823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:25:32.128135 1153823 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-29 02:25:32.118533753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:25:32.128250 1153823 docker.go:295] overlay module found
	I0229 02:25:32.130557 1153823 out.go:97] Using the docker driver based on user configuration
	I0229 02:25:32.130581 1153823 start.go:299] selected driver: docker
	I0229 02:25:32.130588 1153823 start.go:903] validating driver "docker" against <nil>
	I0229 02:25:32.130699 1153823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:25:32.185566 1153823 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-29 02:25:32.176955377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:25:32.185730 1153823 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 02:25:32.186051 1153823 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0229 02:25:32.186207 1153823 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 02:25:32.189003 1153823 out.go:169] Using Docker driver with root privileges
	I0229 02:25:32.191147 1153823 cni.go:84] Creating CNI manager for ""
	I0229 02:25:32.191168 1153823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0229 02:25:32.191179 1153823 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0229 02:25:32.191190 1153823 start_flags.go:323] config:
	{Name:download-only-382285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-382285 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:25:32.193373 1153823 out.go:97] Starting control plane node download-only-382285 in cluster download-only-382285
	I0229 02:25:32.193397 1153823 cache.go:121] Beginning downloading kic base image for docker with crio
	I0229 02:25:32.195614 1153823 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0229 02:25:32.195639 1153823 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:25:32.195797 1153823 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 02:25:32.210515 1153823 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0229 02:25:32.210632 1153823 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0229 02:25:32.210656 1153823 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0229 02:25:32.210662 1153823 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0229 02:25:32.210670 1153823 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0229 02:25:32.283497 1153823 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0229 02:25:32.283524 1153823 cache.go:56] Caching tarball of preloaded images
	I0229 02:25:32.284351 1153823 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:25:32.287622 1153823 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0229 02:25:32.287650 1153823 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0229 02:25:32.400888 1153823 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-382285"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-382285
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (18.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-400877 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-400877 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (18.271400782s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (18.27s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-400877
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-400877: exit status 85 (81.762652ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-096542 | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | -p download-only-096542           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	| delete  | -p download-only-096542           | download-only-096542 | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	| start   | -o=json --download-only           | download-only-382285 | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | -p download-only-382285           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	| delete  | -p download-only-382285           | download-only-382285 | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:25 UTC |
	| start   | -o=json --download-only           | download-only-400877 | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | -p download-only-400877           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:25:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:25:52.398849 1153981 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:25:52.398975 1153981 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:25:52.398986 1153981 out.go:304] Setting ErrFile to fd 2...
	I0229 02:25:52.398993 1153981 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:25:52.399268 1153981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
	I0229 02:25:52.399666 1153981 out.go:298] Setting JSON to true
	I0229 02:25:52.400542 1153981 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22099,"bootTime":1709151454,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0229 02:25:52.400607 1153981 start.go:139] virtualization:  
	I0229 02:25:52.403237 1153981 out.go:97] [download-only-400877] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0229 02:25:52.405457 1153981 out.go:169] MINIKUBE_LOCATION=18063
	I0229 02:25:52.403444 1153981 notify.go:220] Checking for updates...
	I0229 02:25:52.409458 1153981 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:25:52.411148 1153981 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	I0229 02:25:52.413197 1153981 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	I0229 02:25:52.415153 1153981 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0229 02:25:52.418937 1153981 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 02:25:52.419221 1153981 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:25:52.440757 1153981 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0229 02:25:52.440869 1153981 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:25:52.508578 1153981 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-29 02:25:52.499482334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:25:52.508688 1153981 docker.go:295] overlay module found
	I0229 02:25:52.510958 1153981 out.go:97] Using the docker driver based on user configuration
	I0229 02:25:52.510985 1153981 start.go:299] selected driver: docker
	I0229 02:25:52.510992 1153981 start.go:903] validating driver "docker" against <nil>
	I0229 02:25:52.511113 1153981 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:25:52.571015 1153981 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-29 02:25:52.561512276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:25:52.571194 1153981 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 02:25:52.571481 1153981 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0229 02:25:52.571637 1153981 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 02:25:52.574469 1153981 out.go:169] Using Docker driver with root privileges
	I0229 02:25:52.576593 1153981 cni.go:84] Creating CNI manager for ""
	I0229 02:25:52.576613 1153981 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0229 02:25:52.576624 1153981 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0229 02:25:52.576639 1153981 start_flags.go:323] config:
	{Name:download-only-400877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-400877 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:25:52.578885 1153981 out.go:97] Starting control plane node download-only-400877 in cluster download-only-400877
	I0229 02:25:52.578903 1153981 cache.go:121] Beginning downloading kic base image for docker with crio
	I0229 02:25:52.581678 1153981 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0229 02:25:52.581704 1153981 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 02:25:52.581874 1153981 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 02:25:52.596000 1153981 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0229 02:25:52.596127 1153981 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0229 02:25:52.596146 1153981 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0229 02:25:52.596152 1153981 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0229 02:25:52.596159 1153981 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0229 02:25:52.648192 1153981 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0229 02:25:52.648218 1153981 cache.go:56] Caching tarball of preloaded images
	I0229 02:25:52.649057 1153981 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 02:25:52.651249 1153981 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0229 02:25:52.651270 1153981 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0229 02:25:52.762917 1153981 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:9d8119c6fd5c58f71de57a6fdbe27bf3 -> /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0229 02:26:06.945417 1153981 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0229 02:26:06.945526 1153981 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18063-1148303/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-400877"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-400877
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-495072 --alsologtostderr --binary-mirror http://127.0.0.1:36415 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-495072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-495072
--- PASS: TestBinaryMirror (0.57s)
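
TestBinaryMirror passes --binary-mirror pointing at a local HTTP endpoint (http://127.0.0.1:36415 in this run) so that kubectl, kubeadm and kubelet are fetched from that mirror instead of the default release location. A minimal sketch of the same flow, assuming a prepared ./k8s-binary-mirror directory laid out the way minikube expects (the directory, profile name, and choice of HTTP server are illustrative, not part of the test):

	# Serve a prepared directory as the mirror (hypothetical directory; port matches the one in the log).
	python3 -m http.server 36415 --directory ./k8s-binary-mirror &
	# Same flags as the test invocation above, pointed at the local mirror.
	out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:36415 --driver=docker --container-runtime=crio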

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-847636
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-847636: exit status 85 (79.018451ms)

                                                
                                                
-- stdout --
	* Profile "addons-847636" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-847636"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-847636
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-847636: exit status 85 (79.883281ms)

                                                
                                                
-- stdout --
	* Profile "addons-847636" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-847636"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
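
Both PreSetup checks above assert that toggling an addon against a profile that does not exist yet fails cleanly with exit status 85 instead of creating anything. The same behaviour can be observed directly (command and expected status are taken from the log; the trailing echo is only there to surface the exit code):

	# Expect exit status 85 and the "Profile ... not found" hint while addons-847636 has not been created yet.
	out/minikube-linux-arm64 addons enable dashboard -p addons-847636; echo "exit=$?"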

                                                
                                    
x
+
TestAddons/Setup (143.76s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-847636 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-847636 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m23.759710464s)
--- PASS: TestAddons/Setup (143.76s)
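
With the cluster from TestAddons/Setup running, the addons enabled by that single start invocation can be listed with the standard subcommand (profile name from this run):

	# Show which addons are enabled on the addons-847636 profile.
	out/minikube-linux-arm64 -p addons-847636 addons list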

                                                
                                    
x
+
TestAddons/parallel/Registry (15.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 42.435655ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-swh5m" [7f7cef7c-c308-440d-9af0-a62ea6ff5afc] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004785662s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-28lz9" [5a37001f-bc0a-46ed-8c02-1be6d3b01226] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004573439s
addons_test.go:340: (dbg) Run:  kubectl --context addons-847636 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-847636 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-847636 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.085618051s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-847636 ip
2024/02/29 02:28:50 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-847636 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.26s)
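
The registry test exercises two paths: the in-cluster service DNS name and the registry endpoint on the node IP returned by "minikube ip". A condensed reproduction built from the commands and addresses in the log above (flag order rearranged):

	# In-cluster probe, the same wget check the test runs.
	kubectl --context addons-847636 run --rm -it registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# Node-IP probe matching the DEBUG line in the log (192.168.49.2 is this run's node IP).
	curl -sS http://192.168.49.2:5000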

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.98s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-htfxp" [bd1aa643-7da2-4301-834a-034396d91594] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00432362s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-847636
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-847636: (5.976466607s)
--- PASS: TestAddons/parallel/InspektorGadget (11.98s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 27.591507ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-xbb8t" [402b0b1e-9934-4ad8-b735-7b56a9bdab20] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012054711s
addons_test.go:415: (dbg) Run:  kubectl --context addons-847636 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-847636 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-arm64 -p addons-847636 addons disable metrics-server --alsologtostderr -v=1: (1.428325915s)
--- PASS: TestAddons/parallel/MetricsServer (6.62s)
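
Once k8s-app=metrics-server is healthy, the test queries pod metrics; the same data can be pulled manually (context name from this run; the node-level call is an extra illustration, not part of the test):

	# Pod metrics, as invoked by the test.
	kubectl --context addons-847636 top pods -n kube-system
	# Node metrics served by the same metrics-server deployment.
	kubectl --context addons-847636 top nodes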

                                                
                                    
x
+
TestAddons/parallel/CSI (68s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 42.153467ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-847636 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-847636 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b5fb4a72-a062-41b2-92b3-2baa6fe50a47] Pending
helpers_test.go:344: "task-pv-pod" [b5fb4a72-a062-41b2-92b3-2baa6fe50a47] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b5fb4a72-a062-41b2-92b3-2baa6fe50a47] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003978201s
addons_test.go:584: (dbg) Run:  kubectl --context addons-847636 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-847636 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-847636 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-847636 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-847636 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-847636 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-847636 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ab6dc918-a4f8-4e13-a1fa-09a802fea61b] Pending
helpers_test.go:344: "task-pv-pod-restore" [ab6dc918-a4f8-4e13-a1fa-09a802fea61b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ab6dc918-a4f8-4e13-a1fa-09a802fea61b] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004569043s
addons_test.go:626: (dbg) Run:  kubectl --context addons-847636 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-847636 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-847636 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-847636 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-847636 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.014186979s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-847636 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (68.00s)
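
The long runs of `kubectl get pvc ... -o jsonpath={.status.phase}` above are helpers_test.go polling each claim's phase until it settles, within the 6m0s budget stated in the log. Outside the harness, the same wait can be sketched as a small loop; the claim name and context are the ones from this run, and the target phase (Bound) is an assumption about what the helper accepts:

    # poll the PVC phase every few seconds until it reports Bound
    until [ "$(kubectl --context addons-847636 -n default get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 3
    done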

                                                
                                    
TestAddons/parallel/Headlamp (14.11s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-847636 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-847636 --alsologtostderr -v=1: (2.105653131s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-97qj6" [0678b12f-b365-49dc-86e2-98c6327f0a03] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-97qj6" [0678b12f-b365-49dc-86e2-98c6327f0a03] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-97qj6" [0678b12f-b365-49dc-86e2-98c6327f0a03] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003604679s
--- PASS: TestAddons/parallel/Headlamp (14.11s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-75bvr" [643c34ad-8798-452c-9b98-b494d6e7152c] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003522077s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-847636
--- PASS: TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                    
TestAddons/parallel/LocalPath (8.55s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-847636 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-847636 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-847636 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1a221adb-3956-4f7b-9e8c-fe569ab8fdd8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1a221adb-3956-4f7b-9e8c-fe569ab8fdd8] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1a221adb-3956-4f7b-9e8c-fe569ab8fdd8] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003702455s
addons_test.go:891: (dbg) Run:  kubectl --context addons-847636 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-847636 ssh "cat /opt/local-path-provisioner/pvc-1ea50d5e-da40-42ee-8a27-66caf9ac73b4_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-847636 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-847636 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-847636 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.55s)
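
This test drives the storage-provisioner-rancher (local-path) addon end to end: test-pvc is bound, the busybox pod writes file1, and the file is then read straight off the node. The read-back step by hand mirrors the ssh call in the log (the pvc-… directory name is specific to this run and will differ elsewhere):

    # inspect the provisioned claim, then read the file the pod wrote from the node's local-path directory
    kubectl --context addons-847636 get pvc test-pvc -o=json
    out/minikube-linux-arm64 -p addons-847636 ssh "cat /opt/local-path-provisioner/pvc-1ea50d5e-da40-42ee-8a27-66caf9ac73b4_default_test-pvc/file1"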

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.51s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-m48gh" [31959e7d-f3ce-4e2a-a4ce-26c05ed98a5f] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00426102s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-847636
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.51s)

                                                
                                    
TestAddons/parallel/Yakd (6s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-grgbx" [bfa62335-8d0e-4121-85de-a2a166552222] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003855109s
--- PASS: TestAddons/parallel/Yakd (6.00s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-847636 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-847636 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)
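
The assertion here is that, with the gcp-auth addon enabled, a gcp-auth secret shows up in namespaces created afterwards, so the two kubectl calls above are the whole test. Spot-checking the same behaviour by hand (the namespace name below is arbitrary):

    kubectl --context addons-847636 create ns scratch-ns
    kubectl --context addons-847636 get secret gcp-auth -n scratch-ns   # expected to exist, mirroring the new-namespace check above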

                                                
                                    
TestAddons/StoppedEnableDisable (12.21s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-847636
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-847636: (11.907740065s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-847636
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-847636
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-847636
--- PASS: TestAddons/StoppedEnableDisable (12.21s)

                                                
                                    
TestCertOptions (38.75s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-499513 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-499513 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.04379891s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-499513 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-499513 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-499513 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-499513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-499513
E0229 03:06:01.529643 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-499513: (2.015333205s)
--- PASS: TestCertOptions (38.75s)
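
The interesting checks above are that the extra --apiserver-ips/--apiserver-names values end up as SANs in the generated apiserver certificate and that the kubeconfig points at port 8555. A manual version of the same inspection (the grep patterns are illustrative, not the test's exact assertions):

    # look for the custom SANs in the apiserver certificate inside the node
    out/minikube-linux-arm64 -p cert-options-499513 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -E '192\.168\.15\.15|www\.google\.com'
    # confirm the advertised API server port
    kubectl --context cert-options-499513 config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # expect a URL ending in :8555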

                                                
                                    
TestCertExpiration (249.04s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-945996 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-945996 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (35.943914114s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-945996 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0229 03:08:36.508620 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-945996 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (30.798420394s)
helpers_test.go:175: Cleaning up "cert-expiration-945996" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-945996
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-945996: (2.300746514s)
--- PASS: TestCertExpiration (249.04s)
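
TestCertExpiration starts the profile with --cert-expiration=3m and later restarts it with --cert-expiration=8760h, relying on minikube to regenerate the short-lived certificates. One way to watch the effect between the two starts is to check the certificate's notAfter date on the node (the path is taken from the cert-options test above; this is a sketch, assuming the profile is still running):

    out/minikube-linux-arm64 -p cert-expiration-945996 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"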

                                                
                                    
TestForceSystemdFlag (42.32s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-554808 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-554808 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.479549748s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-554808 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-554808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-554808
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-554808: (2.42986793s)
--- PASS: TestForceSystemdFlag (42.32s)
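
With --force-systemd the test only needs to confirm that CRI-O was switched to the systemd cgroup manager, which is why it cats /etc/crio/crio.conf.d/02-crio.conf. A hand check along the same lines (the exact key and value are an assumption about that drop-in, not quoted from this log):

    out/minikube-linux-arm64 -p force-systemd-flag-554808 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager   # expect it set to "systemd"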

                                                
                                    
TestForceSystemdEnv (45.11s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-734203 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-734203 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.389789312s)
helpers_test.go:175: Cleaning up "force-systemd-env-734203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-734203
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-734203: (2.717720689s)
--- PASS: TestForceSystemdEnv (45.11s)

                                                
                                    
TestErrorSpam/setup (27.92s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-870964 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-870964 --driver=docker  --container-runtime=crio
E0229 02:33:36.507428 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
E0229 02:33:36.514109 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
E0229 02:33:36.524551 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
E0229 02:33:36.548614 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
E0229 02:33:36.588870 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
E0229 02:33:36.669761 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-870964 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-870964 --driver=docker  --container-runtime=crio: (27.922445208s)
--- PASS: TestErrorSpam/setup (27.92s)

                                                
                                    
TestErrorSpam/start (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 start --dry-run
E0229 02:33:36.830833 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 start --dry-run
E0229 02:33:37.151563 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

                                                
                                    
TestErrorSpam/status (1.05s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 status
E0229 02:33:37.791772 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 status
--- PASS: TestErrorSpam/status (1.05s)

                                                
                                    
TestErrorSpam/pause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 pause
E0229 02:33:39.072005 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 pause
--- PASS: TestErrorSpam/pause (1.72s)

                                                
                                    
TestErrorSpam/unpause (1.95s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 unpause
E0229 02:33:41.632271 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
--- PASS: TestErrorSpam/unpause (1.95s)

                                                
                                    
TestErrorSpam/stop (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 stop: (1.261263015s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-870964 --log_dir /tmp/nospam-870964 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18063-1148303/.minikube/files/etc/test/nested/copy/1153658/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (46.8s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-552840 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0229 02:33:56.993693 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
E0229 02:34:17.473930 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-552840 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (46.798254822s)
--- PASS: TestFunctional/serial/StartWithProxy (46.80s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.21s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-552840 --alsologtostderr -v=8
E0229 02:34:58.435710 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-552840 --alsologtostderr -v=8: (29.210339633s)
functional_test.go:659: soft start took 29.212996374s for "functional-552840" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.21s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-552840 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.52s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-552840 cache add registry.k8s.io/pause:3.1: (1.222000228s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-552840 cache add registry.k8s.io/pause:3.3: (1.192140506s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-552840 cache add registry.k8s.io/pause:latest: (1.110265914s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.52s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-552840 /tmp/TestFunctionalserialCacheCmdcacheadd_local2268495345/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 cache add minikube-local-cache-test:functional-552840
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 cache delete minikube-local-cache-test:functional-552840
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-552840
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-552840 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (332.088044ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)
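
cache_reload is a delete/restore round trip: remove the cached image inside the node with crictl, confirm the expected inspecti failure (the exit status 1 shown above), then have `cache reload` push it back from the host-side cache. Condensed from the commands in this log:

    out/minikube-linux-arm64 -p functional-552840 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-552840 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image absent, as expected"
    out/minikube-linux-arm64 -p functional-552840 cache reload
    out/minikube-linux-arm64 -p functional-552840 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # should succeed after the reload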

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 kubectl -- --context functional-552840 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-552840 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-552840 logs: (1.665144079s)
--- PASS: TestFunctional/serial/LogsCmd (1.67s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.69s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 logs --file /tmp/TestFunctionalserialLogsFileCmd295631237/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-552840 logs --file /tmp/TestFunctionalserialLogsFileCmd295631237/001/logs.txt: (1.692324367s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.69s)

                                                
                                    
TestFunctional/serial/InvalidService (4.34s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-552840 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-552840
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-552840: exit status 115 (767.281672ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30558 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-552840 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.34s)
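
`minikube service` is expected to fail here because invalid-svc selects no running pod; the exit status 115 and the SVC_UNREACHABLE message in the stderr block are the assertion. Reproducing the check from a shell (the exit-code value is taken from this run):

    kubectl --context functional-552840 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-arm64 service invalid-svc -p functional-552840; echo "exit status: $?"   # 115 == SVC_UNREACHABLE here
    kubectl --context functional-552840 delete -f testdata/invalidsvc.yaml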

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-552840 config get cpus: exit status 14 (94.396327ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-552840 config get cpus: exit status 14 (93.579043ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
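
ConfigCmd is a set/get/unset round trip on the profile's config; `config get` on a key that was never set (or was just unset) exits 14 with "specified key could not be found in config", which is exactly what the two Non-zero exit entries above show. In short:

    out/minikube-linux-arm64 -p functional-552840 config set cpus 2
    out/minikube-linux-arm64 -p functional-552840 config get cpus     # prints 2
    out/minikube-linux-arm64 -p functional-552840 config unset cpus
    out/minikube-linux-arm64 -p functional-552840 config get cpus     # exits 14: key not found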

                                                
                                    
TestFunctional/parallel/DashboardCmd (15.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-552840 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-552840 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1179750: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.30s)

                                                
                                    
TestFunctional/parallel/DryRun (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-552840 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-552840 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (190.70798ms)

                                                
                                                
-- stdout --
	* [functional-552840] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 02:36:36.301778 1179104 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:36:36.301947 1179104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:36:36.301975 1179104 out.go:304] Setting ErrFile to fd 2...
	I0229 02:36:36.301995 1179104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:36:36.302282 1179104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
	I0229 02:36:36.302672 1179104 out.go:298] Setting JSON to false
	I0229 02:36:36.303660 1179104 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22743,"bootTime":1709151454,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0229 02:36:36.303745 1179104 start.go:139] virtualization:  
	I0229 02:36:36.306870 1179104 out.go:177] * [functional-552840] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0229 02:36:36.310241 1179104 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:36:36.312787 1179104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:36:36.310685 1179104 notify.go:220] Checking for updates...
	I0229 02:36:36.315224 1179104 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	I0229 02:36:36.317892 1179104 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	I0229 02:36:36.320544 1179104 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0229 02:36:36.323335 1179104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:36:36.326291 1179104 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:36:36.326809 1179104 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:36:36.348296 1179104 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0229 02:36:36.348414 1179104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:36:36.416102 1179104 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-29 02:36:36.406346832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:36:36.416223 1179104 docker.go:295] overlay module found
	I0229 02:36:36.419076 1179104 out.go:177] * Using the docker driver based on existing profile
	I0229 02:36:36.421694 1179104 start.go:299] selected driver: docker
	I0229 02:36:36.421713 1179104 start.go:903] validating driver "docker" against &{Name:functional-552840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-552840 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:36:36.421804 1179104 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:36:36.425072 1179104 out.go:177] 
	W0229 02:36:36.427653 1179104 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0229 02:36:36.430139 1179104 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-552840 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)
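
The dry run shows minikube validating the request before touching the existing profile: 250MB is below the 1800MB usable minimum, so the first invocation exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY, while the second dry run without the memory override succeeds. As two commands (flags as shown in this log):

    out/minikube-linux-arm64 start -p functional-552840 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio; echo $?   # 23 in this run
    out/minikube-linux-arm64 start -p functional-552840 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio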

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-552840 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-552840 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (210.343438ms)

                                                
                                                
-- stdout --
	* [functional-552840] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 02:36:36.102875 1179064 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:36:36.103133 1179064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:36:36.103163 1179064 out.go:304] Setting ErrFile to fd 2...
	I0229 02:36:36.103182 1179064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:36:36.104235 1179064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
	I0229 02:36:36.104687 1179064 out.go:298] Setting JSON to false
	I0229 02:36:36.105682 1179064 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22743,"bootTime":1709151454,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0229 02:36:36.105785 1179064 start.go:139] virtualization:  
	I0229 02:36:36.109080 1179064 out.go:177] * [functional-552840] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0229 02:36:36.112283 1179064 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:36:36.114367 1179064 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:36:36.112329 1179064 notify.go:220] Checking for updates...
	I0229 02:36:36.117391 1179064 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	I0229 02:36:36.120099 1179064 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	I0229 02:36:36.122441 1179064 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0229 02:36:36.124801 1179064 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:36:36.127758 1179064 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:36:36.128386 1179064 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:36:36.149652 1179064 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0229 02:36:36.149783 1179064 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:36:36.226516 1179064 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-29 02:36:36.216510042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:36:36.226622 1179064 docker.go:295] overlay module found
	I0229 02:36:36.229151 1179064 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0229 02:36:36.231749 1179064 start.go:299] selected driver: docker
	I0229 02:36:36.231766 1179064 start.go:903] validating driver "docker" against &{Name:functional-552840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-552840 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:36:36.231879 1179064 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:36:36.234836 1179064 out.go:177] 
	W0229 02:36:36.237347 1179064 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0229 02:36:36.239903 1179064 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
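Note: the localized message above is the French counterpart of the DryRun failure text, i.e. "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB", so exit status 23 is the expected outcome of this dry run. minikube appears to pick its output language from the host locale; a sketch of how the localized output could be reproduced (the LC_ALL value is an assumption, not taken from this run):

	LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-552840 --dry-run --memory 250MB --driver=docker --container-runtime=crio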

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.37s)
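Note: the -f flag above takes a Go template over minikube's status fields (Host, Kubelet, APIServer, Kubeconfig); the "kublet" spelling is verbatim from the test's format string. An illustrative query with the conventional field names (format string is an example, not copied from the test):

	out/minikube-linux-arm64 -p functional-552840 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'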

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-552840 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-552840 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-xvlq2" [c9465509-fb52-4023-b693-134949e13387] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-xvlq2" [c9465509-fb52-4023-b693-134949e13387] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004282574s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31660
functional_test.go:1671: http://192.168.49.2:31660: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-xvlq2

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31660
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.62s)
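Note: the echoserver body above was fetched through the NodePort URL reported by "service hello-node-connect --url". The same endpoint can be probed manually while the cluster is up (the port is specific to this run):

	curl -s http://192.168.49.2:31660/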

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (24.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c483ab17-00b8-4481-8ee4-310705be977b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005887232s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-552840 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-552840 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-552840 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-552840 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [52643ebd-f657-4df0-9002-defce94065ff] Pending
helpers_test.go:344: "sp-pod" [52643ebd-f657-4df0-9002-defce94065ff] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [52643ebd-f657-4df0-9002-defce94065ff] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004048602s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-552840 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-552840 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-552840 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [48d308b2-b1da-494d-b68e-07620f4e7b1c] Pending
helpers_test.go:344: "sp-pod" [48d308b2-b1da-494d-b68e-07620f4e7b1c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0229 02:36:20.356123 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [48d308b2-b1da-494d-b68e-07620f4e7b1c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004231635s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-552840 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.71s)
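Note: the test writes /tmp/mount/foo from the first sp-pod and lists /tmp/mount from a recreated sp-pod, which only works if the claim stays bound to the same volume across pod deletion. A hedged manual check of that binding (the jsonpath expression is illustrative):

	kubectl --context functional-552840 get pvc myclaim -o jsonpath='{.status.phase}'
	# expected: Bound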

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.98s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh -n functional-552840 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 cp functional-552840:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1275858190/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh -n functional-552840 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh -n functional-552840 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.73s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1153658/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "sudo cat /etc/test/nested/copy/1153658/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)
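Note: the checked path follows minikube's file-sync convention: files placed under $MINIKUBE_HOME/files/ on the host are expected to appear inside the node at the same relative path after a start (this description is based on minikube's documented behavior, not on anything in this log). An illustrative setup:

	mkdir -p $MINIKUBE_HOME/files/etc/example
	echo "synced" > $MINIKUBE_HOME/files/etc/example/note
	# after the next start, /etc/example/note should exist inside the node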

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1153658.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "sudo cat /etc/ssl/certs/1153658.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1153658.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "sudo cat /usr/share/ca-certificates/1153658.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/11536582.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "sudo cat /etc/ssl/certs/11536582.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/11536582.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "sudo cat /usr/share/ca-certificates/11536582.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.86s)
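Note: the .0 filenames (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash style names, the convention used to index CA certificates under /etc/ssl/certs, and presumably refer to the same synced certificates as the .pem files above. The hash name for a given certificate can be derived on the host (the input path is hypothetical):

	openssl x509 -noout -hash -in /path/to/1153658.pem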

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-552840 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-552840 ssh "sudo systemctl is-active docker": exit status 1 (364.462404ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-552840 ssh "sudo systemctl is-active containerd": exit status 1 (404.689792ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)
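Note: both non-zero exits above are the expected result: stdout reports "inactive", and the ssh exit status 3 simply propagates systemctl's non-zero result for an inactive unit, confirming that docker and containerd are disabled while crio is the active runtime. The positive case can be checked the same way (illustrative):

	out/minikube-linux-arm64 -p functional-552840 ssh "sudo systemctl is-active crio"
	# expected: active (exit status 0)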

                                                
                                    
x
+
TestFunctional/parallel/License (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-552840 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-552840 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-552840 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-552840 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1176876: os: process already finished
helpers_test.go:502: unable to terminate pid 1176720: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-552840 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-552840 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0fae2d39-4b81-425b-92ce-3fc17193485b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0fae2d39-4b81-425b-92ce-3fc17193485b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003481609s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.52s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-552840 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.212.67 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
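Note: 10.110.212.67 is presumably the LoadBalancer ingress IP returned by the jsonpath query in WaitService/IngressIP above, and it is only reachable from the host while "minikube tunnel" is running. An illustrative manual probe (the IP is specific to this run):

	curl -sI http://10.110.212.67/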

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-552840 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-552840 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-552840 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-2bj2m" [cde26134-4a31-4a5c-badd-b7744ed2c2a3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-2bj2m" [cde26134-4a31-4a5c-badd-b7744ed2c2a3] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004231451s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "341.162677ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "74.992787ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "332.931954ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "69.028242ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-552840 /tmp/TestFunctionalparallelMountCmdany-port2750271748/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709174188012899313" to /tmp/TestFunctionalparallelMountCmdany-port2750271748/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709174188012899313" to /tmp/TestFunctionalparallelMountCmdany-port2750271748/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709174188012899313" to /tmp/TestFunctionalparallelMountCmdany-port2750271748/001/test-1709174188012899313
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-552840 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (382.258434ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 29 02:36 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 29 02:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 29 02:36 test-1709174188012899313
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh cat /mount-9p/test-1709174188012899313
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-552840 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [154fab3c-0153-468d-828c-08c1287ec3d8] Pending
helpers_test.go:344: "busybox-mount" [154fab3c-0153-468d-828c-08c1287ec3d8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [154fab3c-0153-468d-828c-08c1287ec3d8] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [154fab3c-0153-468d-828c-08c1287ec3d8] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004704321s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-552840 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-552840 /tmp/TestFunctionalparallelMountCmdany-port2750271748/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.45s)
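Note: the first findmnt probe fails only because the 9p mount was not up yet; the retry succeeds, and the guest listing then matches the files written on the host side. A minimal manual round trip along the same lines (the host path is hypothetical):

	out/minikube-linux-arm64 mount -p functional-552840 /tmp/hostdir:/mount-9p &
	out/minikube-linux-arm64 -p functional-552840 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-552840 ssh "ls -la /mount-9p"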

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 service list -o json
functional_test.go:1490: Took "513.2314ms" to run "out/minikube-linux-arm64 -p functional-552840 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30978
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30978
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-552840 /tmp/TestFunctionalparallelMountCmdspecific-port225458537/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-552840 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (510.935943ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-552840 /tmp/TestFunctionalparallelMountCmdspecific-port225458537/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-552840 ssh "sudo umount -f /mount-9p": exit status 1 (372.133453ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-552840 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-552840 /tmp/TestFunctionalparallelMountCmdspecific-port225458537/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.63s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-552840 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3870442/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-552840 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3870442/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-552840 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3870442/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-552840 ssh "findmnt -T" /mount1: (1.149600419s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-552840 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-552840 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3870442/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-552840 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3870442/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-552840 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3870442/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.10s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-552840 version -o=json --components: (1.278860214s)
--- PASS: TestFunctional/parallel/Version/components (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-552840 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-552840
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240202-8f1494ea
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-552840 image ls --format short --alsologtostderr:
I0229 02:37:02.747128 1181250 out.go:291] Setting OutFile to fd 1 ...
I0229 02:37:02.747249 1181250 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 02:37:02.747293 1181250 out.go:304] Setting ErrFile to fd 2...
I0229 02:37:02.747305 1181250 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 02:37:02.747569 1181250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
I0229 02:37:02.748333 1181250 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 02:37:02.748498 1181250 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 02:37:02.749020 1181250 cli_runner.go:164] Run: docker container inspect functional-552840 --format={{.State.Status}}
I0229 02:37:02.765921 1181250 ssh_runner.go:195] Run: systemctl --version
I0229 02:37:02.765967 1181250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
I0229 02:37:02.800404 1181250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/functional-552840/id_rsa Username:docker}
I0229 02:37:02.899513 1181250 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
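Note: as the stderr above shows, "image ls" is served by crictl inside the node over ssh; the same inventory can be inspected directly (illustrative):

	out/minikube-linux-arm64 -p functional-552840 ssh "sudo crictl images"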

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-552840 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 760b7cbba31e1 | 196MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/google-containers/addon-resizer  | functional-552840  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| docker.io/library/nginx                 | alpine             | be5e6f23a9904 | 45.4MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4740c1948d3fc | 60.9MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-552840 image ls --format table --alsologtostderr:
I0229 02:37:03.315309 1181377 out.go:291] Setting OutFile to fd 1 ...
I0229 02:37:03.315428 1181377 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 02:37:03.315433 1181377 out.go:304] Setting ErrFile to fd 2...
I0229 02:37:03.315437 1181377 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 02:37:03.315748 1181377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
I0229 02:37:03.316508 1181377 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 02:37:03.316630 1181377 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 02:37:03.317365 1181377 cli_runner.go:164] Run: docker container inspect functional-552840 --format={{.State.Status}}
I0229 02:37:03.347669 1181377 ssh_runner.go:195] Run: systemctl --version
I0229 02:37:03.347770 1181377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
I0229 02:37:03.370172 1181377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/functional-552840/id_rsa Username:docker}
I0229 02:37:03.464766 1181377 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-552840 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/
coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"},{"id":"4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"60940831"},{"id":"760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676","repoDigests":["docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107","docker.io/library/ngi
nx@sha256:d5ec359034df4b326b8b5f0efa26dbd8742d166161b7edb37321b795c8fe5f48"],"repoTags":["docker.io/library/nginx:latest"],"size":"196117996"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0
602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-552840"],"size":"34114467"},{"id":"9cdd6470f48c8b12753
0b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registry.k8s.io/kube-s
cheduler:v1.28.4"],"size":"59253556"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":["docker.io/library/nginx@sha256:34aa0a372d3220dc0448131f809c72d8085f79bdec8058ad6970fc034a395674","docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45393258"},{"id":"72565bf5bbed
fb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-552840 image ls --format json --alsologtostderr:
I0229 02:37:03.049060 1181310 out.go:291] Setting OutFile to fd 1 ...
I0229 02:37:03.049280 1181310 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 02:37:03.049310 1181310 out.go:304] Setting ErrFile to fd 2...
I0229 02:37:03.049331 1181310 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 02:37:03.049625 1181310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
I0229 02:37:03.050399 1181310 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 02:37:03.050571 1181310 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 02:37:03.051154 1181310 cli_runner.go:164] Run: docker container inspect functional-552840 --format={{.State.Status}}
I0229 02:37:03.074589 1181310 ssh_runner.go:195] Run: systemctl --version
I0229 02:37:03.074645 1181310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
I0229 02:37:03.102901 1181310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/functional-552840/id_rsa Username:docker}
I0229 02:37:03.192248 1181310 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
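
Editor's note: the stdout above is a single JSON array in which every image record carries the fields id, repoDigests, repoTags, and size (bytes, reported as a decimal string). A minimal sketch of consuming that output from Go follows; the binary path and profile name are taken from this run, and the sketch is illustrative rather than code from the test suite.

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
    ID          string   `json:"id"`
    RepoDigests []string `json:"repoDigests"`
    RepoTags    []string `json:"repoTags"`
    Size        string   `json:"size"` // size in bytes, as a decimal string
}

func main() {
    // Binary path and profile name match this report; adjust for your environment.
    out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-552840",
        "image", "ls", "--format", "json").Output()
    if err != nil {
        panic(err)
    }
    var images []image
    if err := json.Unmarshal(out, &images); err != nil {
        panic(err)
    }
    for _, img := range images {
        fmt.Printf("%.12s  %v\n", img.ID, img.RepoTags) // short ID plus tags
    }
}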

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-552840 image ls --format yaml --alsologtostderr:
- id: 4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "60940831"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests:
- docker.io/library/nginx@sha256:34aa0a372d3220dc0448131f809c72d8085f79bdec8058ad6970fc034a395674
- docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9
repoTags:
- docker.io/library/nginx:alpine
size: "45393258"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-552840
size: "34114467"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676
repoDigests:
- docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107
- docker.io/library/nginx@sha256:d5ec359034df4b326b8b5f0efa26dbd8742d166161b7edb37321b795c8fe5f48
repoTags:
- docker.io/library/nginx:latest
size: "196117996"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-552840 image ls --format yaml --alsologtostderr:
I0229 02:37:02.745342 1181249 out.go:291] Setting OutFile to fd 1 ...
I0229 02:37:02.745576 1181249 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 02:37:02.745606 1181249 out.go:304] Setting ErrFile to fd 2...
I0229 02:37:02.745625 1181249 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 02:37:02.745893 1181249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
I0229 02:37:02.746548 1181249 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 02:37:02.746721 1181249 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 02:37:02.747382 1181249 cli_runner.go:164] Run: docker container inspect functional-552840 --format={{.State.Status}}
I0229 02:37:02.765899 1181249 ssh_runner.go:195] Run: systemctl --version
I0229 02:37:02.765948 1181249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
I0229 02:37:02.783307 1181249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/functional-552840/id_rsa Username:docker}
I0229 02:37:02.876367 1181249 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-552840 ssh pgrep buildkitd: exit status 1 (394.307619ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image build -t localhost/my-image:functional-552840 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-552840 image build -t localhost/my-image:functional-552840 testdata/build --alsologtostderr: (2.178896476s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-552840 image build -t localhost/my-image:functional-552840 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 97f6aa19bfb
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-552840
--> 5c42e486b0e
Successfully tagged localhost/my-image:functional-552840
5c42e486b0e752a4b6be29bcf0f64ea6f1913c1b4d789a37bc10cddd17abf4d6
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-552840 image build -t localhost/my-image:functional-552840 testdata/build --alsologtostderr:
I0229 02:37:03.425806 1181391 out.go:291] Setting OutFile to fd 1 ...
I0229 02:37:03.426564 1181391 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 02:37:03.426602 1181391 out.go:304] Setting ErrFile to fd 2...
I0229 02:37:03.426627 1181391 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 02:37:03.426977 1181391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
I0229 02:37:03.427779 1181391 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 02:37:03.429238 1181391 config.go:182] Loaded profile config "functional-552840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 02:37:03.429975 1181391 cli_runner.go:164] Run: docker container inspect functional-552840 --format={{.State.Status}}
I0229 02:37:03.448056 1181391 ssh_runner.go:195] Run: systemctl --version
I0229 02:37:03.448119 1181391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-552840
I0229 02:37:03.471613 1181391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/functional-552840/id_rsa Username:docker}
I0229 02:37:03.572421 1181391 build_images.go:151] Building image from path: /tmp/build.2972041187.tar
I0229 02:37:03.572538 1181391 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0229 02:37:03.582297 1181391 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2972041187.tar
I0229 02:37:03.585552 1181391 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2972041187.tar: stat -c "%s %y" /var/lib/minikube/build/build.2972041187.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2972041187.tar': No such file or directory
I0229 02:37:03.585582 1181391 ssh_runner.go:362] scp /tmp/build.2972041187.tar --> /var/lib/minikube/build/build.2972041187.tar (3072 bytes)
I0229 02:37:03.609304 1181391 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2972041187
I0229 02:37:03.618100 1181391 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2972041187 -xf /var/lib/minikube/build/build.2972041187.tar
I0229 02:37:03.627419 1181391 crio.go:297] Building image: /var/lib/minikube/build/build.2972041187
I0229 02:37:03.627514 1181391 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-552840 /var/lib/minikube/build/build.2972041187 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0229 02:37:05.479405 1181391 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-552840 /var/lib/minikube/build/build.2972041187 --cgroup-manager=cgroupfs: (1.851859484s)
I0229 02:37:05.479491 1181391 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2972041187
I0229 02:37:05.488268 1181391 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2972041187.tar
I0229 02:37:05.497210 1181391 build_images.go:207] Built localhost/my-image:functional-552840 from /tmp/build.2972041187.tar
I0229 02:37:05.497242 1181391 build_images.go:123] succeeded building to: functional-552840
I0229 02:37:05.497248 1181391 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.82s)
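
Editor's note: the three STEP lines above imply a very small build context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /, plus the content.txt file itself). The real contents of testdata/build are not shown in the log, so the Go sketch below, which recreates an equivalent context and invokes the same image build command, is an assumption for illustration, not the suite's code.

package main

import (
    "os"
    "os/exec"
    "path/filepath"
)

func main() {
    // Assumed build context matching the three steps logged above.
    dir, err := os.MkdirTemp("", "build")
    if err != nil {
        panic(err)
    }
    dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
    if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
        panic(err)
    }
    if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
        panic(err)
    }
    // minikube ships the context to the node and, with the crio runtime, builds it via podman.
    cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-552840",
        "image", "build", "-t", "localhost/my-image:functional-552840", dir)
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    if err := cmd.Run(); err != nil {
        panic(err)
    }
}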

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.575017966s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-552840
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.61s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image load --daemon gcr.io/google-containers/addon-resizer:functional-552840 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-552840 image load --daemon gcr.io/google-containers/addon-resizer:functional-552840 --alsologtostderr: (4.525270094s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.77s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image load --daemon gcr.io/google-containers/addon-resizer:functional-552840 --alsologtostderr
2024/02/29 02:36:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-552840 image load --daemon gcr.io/google-containers/addon-resizer:functional-552840 --alsologtostderr: (3.085304277s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.37s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.550033564s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-552840
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image load --daemon gcr.io/google-containers/addon-resizer:functional-552840 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-552840 image load --daemon gcr.io/google-containers/addon-resizer:functional-552840 --alsologtostderr: (3.729983004s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image save gcr.io/google-containers/addon-resizer:functional-552840 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.90s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image rm gcr.io/google-containers/addon-resizer:functional-552840 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-552840 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.192134502s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-552840
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-552840 image save --daemon gcr.io/google-containers/addon-resizer:functional-552840 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-552840
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.94s)
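
Editor's note: taken together, the four tests above exercise a full round trip: export an image from the cluster's container runtime to a tarball, delete it from the cluster, re-import it from the tarball, and finally copy it back into the host Docker daemon. A hypothetical helper chaining the same commands is sketched below; the binary path, profile, and tarball location are assumptions for illustration.

package main

import (
    "os"
    "os/exec"
)

// run shells out to the minikube binary used throughout this report; it is a
// hypothetical helper, not code from functional_test.go.
func run(args ...string) {
    cmd := exec.Command("out/minikube-linux-arm64",
        append([]string{"-p", "functional-552840"}, args...)...)
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    if err := cmd.Run(); err != nil {
        panic(err)
    }
}

func main() {
    const img = "gcr.io/google-containers/addon-resizer:functional-552840"
    const tar = "/tmp/addon-resizer-save.tar" // the test writes into its Jenkins workspace instead
    run("image", "save", img, tar)        // export from the cluster's container runtime
    run("image", "rm", img)               // remove it from the cluster
    run("image", "load", tar)             // re-import it from the tarball
    run("image", "save", "--daemon", img) // copy it back into the host Docker daemon
}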

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-552840
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-552840
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-552840
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (97.43s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-080946 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0229 02:38:36.506599 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-080946 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m37.431547587s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (97.43s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.95s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-080946 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-080946 addons enable ingress --alsologtostderr -v=5: (11.951924224s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.95s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.68s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-080946 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.68s)

                                                
                                    
x
+
TestJSONOutput/start/Command (51.49s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-873372 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0229 02:42:23.451482 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-873372 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (51.485622466s)
--- PASS: TestJSONOutput/start/Command (51.49s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-873372 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-873372 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.95s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-873372 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-873372 --output=json --user=testUser: (5.947580361s)
--- PASS: TestJSONOutput/stop/Command (5.95s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-422846 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-422846 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.901346ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8db6dcfd-328b-4a7d-b5f0-cd9692a90840","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-422846] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f4445c7-c79f-493c-b875-51a319b0fbf7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18063"}}
	{"specversion":"1.0","id":"1f91c38b-6b42-4cf3-b375-152d8c042c6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a9c118b1-443d-46f6-8181-18736dd61fce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig"}}
	{"specversion":"1.0","id":"c58cd55d-aaac-4375-8199-7380580260db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube"}}
	{"specversion":"1.0","id":"e4d827f6-91d5-4509-a9f4-73a859a04893","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3ef9385d-2ff3-4b63-950a-1dff6e06cfbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d4d8dee5-847b-4f5e-8a5e-e1405144992c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-422846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-422846
--- PASS: TestErrorJSONOutput (0.24s)
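
Editor's note: each line of the stdout above is a CloudEvents-style JSON object whose type field distinguishes step, info, and error events. The sketch below, which assumes that same event shape, reads such a stream from stdin and surfaces only the error events; it is hypothetical tooling, not part of json_output_test.go.

package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "os"
)

// event mirrors the CloudEvents-style lines in the stdout above; only the
// fields needed here are declared.
type event struct {
    Type string            `json:"type"`
    Data map[string]string `json:"data"`
}

func main() {
    // Pipe the output of a --output=json minikube command into this program,
    // one JSON object per line.
    sc := bufio.NewScanner(os.Stdin)
    for sc.Scan() {
        var ev event
        if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
            continue // ignore anything that is not a JSON event line
        }
        if ev.Type == "io.k8s.sigs.minikube.error" {
            fmt.Printf("error %s (exit code %s): %s\n",
                ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
        }
    }
}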

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (40.25s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-420189 --network=
E0229 02:43:36.506530 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-420189 --network=: (38.144602196s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-420189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-420189
E0229 02:43:45.372111 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-420189: (2.074836315s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.25s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (32.56s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-929761 --network=bridge
E0229 02:43:58.722975 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
E0229 02:43:58.728237 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
E0229 02:43:58.738486 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
E0229 02:43:58.758740 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
E0229 02:43:58.799014 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
E0229 02:43:58.879281 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
E0229 02:43:59.039656 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
E0229 02:43:59.360198 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
E0229 02:44:00.005490 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
E0229 02:44:01.285724 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
E0229 02:44:03.846929 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
E0229 02:44:08.967208 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-929761 --network=bridge: (30.582566869s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-929761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-929761
E0229 02:44:19.207437 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-929761: (1.95559353s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.56s)

                                                
                                    
x
+
TestKicExistingNetwork (33.02s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-729965 --network=existing-network
E0229 02:44:39.687676 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-729965 --network=existing-network: (30.958048009s)
helpers_test.go:175: Cleaning up "existing-network-729965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-729965
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-729965: (1.913117582s)
--- PASS: TestKicExistingNetwork (33.02s)
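
Editor's note: TestKicExistingNetwork relies on a Docker network that already exists before minikube starts, so minikube attaches to it rather than creating its own. The Go sketch below shows that flow end to end; the docker network create step is an assumption inferred from the test name, since the pre-creation itself is not shown in the log.

package main

import (
    "os"
    "os/exec"
)

// must runs a command, mirrors its output, and stops on the first failure.
func must(cmd *exec.Cmd) {
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    if err := cmd.Run(); err != nil {
        panic(err)
    }
}

func main() {
    // Create the network up front, then tell minikube to reuse it instead of creating one.
    must(exec.Command("docker", "network", "create", "existing-network"))
    must(exec.Command("out/minikube-linux-arm64", "start", "-p", "existing-network-729965",
        "--network=existing-network"))
}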

                                                
                                    
x
+
TestKicCustomSubnet (31.74s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-737350 --subnet=192.168.60.0/24
E0229 02:45:20.647885 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-737350 --subnet=192.168.60.0/24: (29.698806862s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-737350 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-737350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-737350
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-737350: (2.018590014s)
--- PASS: TestKicCustomSubnet (31.74s)
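
Editor's note: the check at kic_custom_network_test.go:161 reads the subnet back with a Go template over docker network inspect. A standalone sketch of the same verification follows; the network name and expected subnet are taken from this run, and everything else is illustrative.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // Read back the subnet of the network minikube created for the profile.
    out, err := exec.Command("docker", "network", "inspect", "custom-subnet-737350",
        "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
    if err != nil {
        panic(err)
    }
    got := strings.TrimSpace(string(out))
    if got != "192.168.60.0/24" {
        panic(fmt.Sprintf("unexpected subnet %q", got))
    }
    fmt.Println("subnet ok:", got)
}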

                                                
                                    
x
+
TestKicStaticIP (33.56s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-555847 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-555847 --static-ip=192.168.200.200: (31.368727975s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-555847 ip
helpers_test.go:175: Cleaning up "static-ip-555847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-555847
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-555847: (2.02690578s)
--- PASS: TestKicStaticIP (33.56s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (69.94s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-177505 --driver=docker  --container-runtime=crio
E0229 02:46:01.529429 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 02:46:29.212890 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-177505 --driver=docker  --container-runtime=crio: (31.314570799s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-180360 --driver=docker  --container-runtime=crio
E0229 02:46:42.568106 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-180360 --driver=docker  --container-runtime=crio: (33.191880384s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-177505
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-180360
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-180360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-180360
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-180360: (1.930533994s)
helpers_test.go:175: Cleaning up "first-177505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-177505
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-177505: (2.271822593s)
--- PASS: TestMinikubeProfile (69.94s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (7.4s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-074009 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-074009 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.39703928s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.40s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-074009 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.89s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-087802 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-087802 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.892131149s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.89s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-087802 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-074009 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-074009 --alsologtostderr -v=5: (1.624871947s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-087802 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-087802
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-087802: (1.21253482s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.99s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-087802
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-087802: (6.988740677s)
--- PASS: TestMountStart/serial/RestartStopped (7.99s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-087802 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)
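Taken together, the MountStart steps above amount to the following sequence (a sketch using the same flags the log records; the profile name is illustrative):

	$ minikube start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 \
	    --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=crio
	$ minikube -p mount-demo ssh -- ls /minikube-host   # the host directory is visible inside the node
	$ minikube stop -p mount-demo
	$ minikube start -p mount-demo                      # restart; VerifyMountPostStop shows the mount comes back
	$ minikube -p mount-demo ssh -- ls /minikube-host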

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (70.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-955565 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0229 02:48:36.506722 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-955565 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m9.506391535s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.01s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-955565 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-955565 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-955565 -- rollout status deployment/busybox: (4.241793379s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-955565 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-955565 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-955565 -- exec busybox-5b5d89c9d6-8sdpg -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-955565 -- exec busybox-5b5d89c9d6-p46wc -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-955565 -- exec busybox-5b5d89c9d6-8sdpg -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-955565 -- exec busybox-5b5d89c9d6-p46wc -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-955565 -- exec busybox-5b5d89c9d6-8sdpg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-955565 -- exec busybox-5b5d89c9d6-p46wc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.29s)
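The DNS checks above drive kubectl through minikube's profile-scoped wrapper; a minimal sketch (the pod name is a placeholder, the manifest is the testdata file used above):

	$ minikube kubectl -p multinode-demo -- apply -f multinode-pod-dns-test.yaml
	$ minikube kubectl -p multinode-demo -- rollout status deployment/busybox
	$ minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[*].metadata.name}'
	$ minikube kubectl -p multinode-demo -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local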

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (1.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-955565 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-955565 -- exec busybox-5b5d89c9d6-8sdpg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-955565 -- exec busybox-5b5d89c9d6-8sdpg -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-955565 -- exec busybox-5b5d89c9d6-p46wc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-955565 -- exec busybox-5b5d89c9d6-p46wc -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (19.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-955565 -v 3 --alsologtostderr
E0229 02:48:58.723284 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-955565 -v 3 --alsologtostderr: (18.932120999s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.61s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-955565 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 cp testdata/cp-test.txt multinode-955565:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 cp multinode-955565:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile39118383/001/cp-test_multinode-955565.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 cp multinode-955565:/home/docker/cp-test.txt multinode-955565-m02:/home/docker/cp-test_multinode-955565_multinode-955565-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565-m02 "sudo cat /home/docker/cp-test_multinode-955565_multinode-955565-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 cp multinode-955565:/home/docker/cp-test.txt multinode-955565-m03:/home/docker/cp-test_multinode-955565_multinode-955565-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565-m03 "sudo cat /home/docker/cp-test_multinode-955565_multinode-955565-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 cp testdata/cp-test.txt multinode-955565-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 cp multinode-955565-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile39118383/001/cp-test_multinode-955565-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 cp multinode-955565-m02:/home/docker/cp-test.txt multinode-955565:/home/docker/cp-test_multinode-955565-m02_multinode-955565.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565 "sudo cat /home/docker/cp-test_multinode-955565-m02_multinode-955565.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 cp multinode-955565-m02:/home/docker/cp-test.txt multinode-955565-m03:/home/docker/cp-test_multinode-955565-m02_multinode-955565-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565-m03 "sudo cat /home/docker/cp-test_multinode-955565-m02_multinode-955565-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 cp testdata/cp-test.txt multinode-955565-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 cp multinode-955565-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile39118383/001/cp-test_multinode-955565-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 cp multinode-955565-m03:/home/docker/cp-test.txt multinode-955565:/home/docker/cp-test_multinode-955565-m03_multinode-955565.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565 "sudo cat /home/docker/cp-test_multinode-955565-m03_multinode-955565.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 cp multinode-955565-m03:/home/docker/cp-test.txt multinode-955565-m02:/home/docker/cp-test_multinode-955565-m03_multinode-955565-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 ssh -n multinode-955565-m02 "sudo cat /home/docker/cp-test_multinode-955565-m03_multinode-955565-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.29s)
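The copy checks above repeat one pattern: cp a file onto a node, then cat it over ssh to confirm it arrived. A minimal sketch with illustrative names:

	$ minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
	$ minikube -p multinode-demo ssh -n multinode-demo "sudo cat /home/docker/cp-test.txt"
	$ minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
	$ minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"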

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-955565 node stop m03: (1.235659256s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-955565 status: exit status 7 (506.086475ms)

                                                
                                                
-- stdout --
	multinode-955565
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-955565-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-955565-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-955565 status --alsologtostderr: exit status 7 (522.72423ms)

                                                
                                                
-- stdout --
	multinode-955565
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-955565-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-955565-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 02:49:25.770324 1227792 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:49:25.770467 1227792 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:49:25.770477 1227792 out.go:304] Setting ErrFile to fd 2...
	I0229 02:49:25.770484 1227792 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:49:25.770741 1227792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
	I0229 02:49:25.770924 1227792 out.go:298] Setting JSON to false
	I0229 02:49:25.770950 1227792 mustload.go:65] Loading cluster: multinode-955565
	I0229 02:49:25.770998 1227792 notify.go:220] Checking for updates...
	I0229 02:49:25.771400 1227792 config.go:182] Loaded profile config "multinode-955565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:49:25.771421 1227792 status.go:255] checking status of multinode-955565 ...
	I0229 02:49:25.771918 1227792 cli_runner.go:164] Run: docker container inspect multinode-955565 --format={{.State.Status}}
	I0229 02:49:25.795012 1227792 status.go:330] multinode-955565 host status = "Running" (err=<nil>)
	I0229 02:49:25.795035 1227792 host.go:66] Checking if "multinode-955565" exists ...
	I0229 02:49:25.795436 1227792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-955565
	I0229 02:49:25.811740 1227792 host.go:66] Checking if "multinode-955565" exists ...
	I0229 02:49:25.812202 1227792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 02:49:25.812263 1227792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-955565
	I0229 02:49:25.839168 1227792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34112 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/multinode-955565/id_rsa Username:docker}
	I0229 02:49:25.929163 1227792 ssh_runner.go:195] Run: systemctl --version
	I0229 02:49:25.933425 1227792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:49:25.944718 1227792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 02:49:26.017059 1227792 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:66 SystemTime:2024-02-29 02:49:25.996579494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 02:49:26.017685 1227792 kubeconfig.go:92] found "multinode-955565" server: "https://192.168.58.2:8443"
	I0229 02:49:26.017712 1227792 api_server.go:166] Checking apiserver status ...
	I0229 02:49:26.017760 1227792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:49:26.028729 1227792 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1278/cgroup
	I0229 02:49:26.038058 1227792 api_server.go:182] apiserver freezer: "9:freezer:/docker/ffbd33600e4f017da4226c3342038eebe3eda5a6df7959dbe0edf743c0824b83/crio/crio-1a5e21922192688e178d22eddff1782fe7d37cc169892e94d6b8f538d1057845"
	I0229 02:49:26.038128 1227792 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ffbd33600e4f017da4226c3342038eebe3eda5a6df7959dbe0edf743c0824b83/crio/crio-1a5e21922192688e178d22eddff1782fe7d37cc169892e94d6b8f538d1057845/freezer.state
	I0229 02:49:26.047572 1227792 api_server.go:204] freezer state: "THAWED"
	I0229 02:49:26.047599 1227792 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0229 02:49:26.055968 1227792 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0229 02:49:26.056085 1227792 status.go:421] multinode-955565 apiserver status = Running (err=<nil>)
	I0229 02:49:26.056098 1227792 status.go:257] multinode-955565 status: &{Name:multinode-955565 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 02:49:26.056156 1227792 status.go:255] checking status of multinode-955565-m02 ...
	I0229 02:49:26.056481 1227792 cli_runner.go:164] Run: docker container inspect multinode-955565-m02 --format={{.State.Status}}
	I0229 02:49:26.072750 1227792 status.go:330] multinode-955565-m02 host status = "Running" (err=<nil>)
	I0229 02:49:26.072776 1227792 host.go:66] Checking if "multinode-955565-m02" exists ...
	I0229 02:49:26.073087 1227792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-955565-m02
	I0229 02:49:26.090390 1227792 host.go:66] Checking if "multinode-955565-m02" exists ...
	I0229 02:49:26.090708 1227792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 02:49:26.090759 1227792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-955565-m02
	I0229 02:49:26.106695 1227792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34117 SSHKeyPath:/home/jenkins/minikube-integration/18063-1148303/.minikube/machines/multinode-955565-m02/id_rsa Username:docker}
	I0229 02:49:26.201441 1227792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:49:26.212880 1227792 status.go:257] multinode-955565-m02 status: &{Name:multinode-955565-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0229 02:49:26.212916 1227792 status.go:255] checking status of multinode-955565-m03 ...
	I0229 02:49:26.213238 1227792 cli_runner.go:164] Run: docker container inspect multinode-955565-m03 --format={{.State.Status}}
	I0229 02:49:26.228473 1227792 status.go:330] multinode-955565-m03 host status = "Stopped" (err=<nil>)
	I0229 02:49:26.228498 1227792 status.go:343] host is not running, skipping remaining checks
	I0229 02:49:26.228505 1227792 status.go:257] multinode-955565-m03 status: &{Name:multinode-955565-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (11.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 node start m03 --alsologtostderr
E0229 02:49:26.408684 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-955565 node start m03 --alsologtostderr: (10.956607072s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.72s)
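StopNode and StartAfterStop together exercise the per-node lifecycle; a minimal sketch of that cycle:

	$ minikube -p multinode-demo node stop m03      # status now exits 7 and reports the node as Stopped
	$ minikube -p multinode-demo status
	$ minikube -p multinode-demo node start m03 --alsologtostderr
	$ minikube -p multinode-demo status             # all nodes report Running again
	$ kubectl get nodes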

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (119.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-955565
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-955565
E0229 02:49:59.556831 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-955565: (24.808546932s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-955565 --wait=true -v=8 --alsologtostderr
E0229 02:51:01.529712 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-955565 --wait=true -v=8 --alsologtostderr: (1m34.163213393s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-955565
--- PASS: TestMultiNode/serial/RestartKeepsNodes (119.13s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (4.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-955565 node delete m03: (4.278059297s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.98s)
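Adding and removing a worker, as AddNode and DeleteNode do above, is symmetric; a minimal sketch:

	$ minikube node add -p multinode-demo -v 3 --alsologtostderr   # provisions the next worker (m03 here)
	$ minikube -p multinode-demo node delete m03
	$ kubectl get nodes                                            # the deleted worker no longer appears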

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-955565 stop: (23.685360035s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-955565 status: exit status 7 (98.239929ms)

                                                
                                                
-- stdout --
	multinode-955565
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-955565-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-955565 status --alsologtostderr: exit status 7 (104.694983ms)

                                                
                                                
-- stdout --
	multinode-955565
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-955565-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 02:52:05.903316 1235855 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:52:05.903589 1235855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:52:05.903617 1235855 out.go:304] Setting ErrFile to fd 2...
	I0229 02:52:05.903637 1235855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:52:05.903906 1235855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
	I0229 02:52:05.904165 1235855 out.go:298] Setting JSON to false
	I0229 02:52:05.904239 1235855 mustload.go:65] Loading cluster: multinode-955565
	I0229 02:52:05.904352 1235855 notify.go:220] Checking for updates...
	I0229 02:52:05.904729 1235855 config.go:182] Loaded profile config "multinode-955565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:52:05.904766 1235855 status.go:255] checking status of multinode-955565 ...
	I0229 02:52:05.905614 1235855 cli_runner.go:164] Run: docker container inspect multinode-955565 --format={{.State.Status}}
	I0229 02:52:05.925982 1235855 status.go:330] multinode-955565 host status = "Stopped" (err=<nil>)
	I0229 02:52:05.926001 1235855 status.go:343] host is not running, skipping remaining checks
	I0229 02:52:05.926008 1235855 status.go:257] multinode-955565 status: &{Name:multinode-955565 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 02:52:05.926042 1235855 status.go:255] checking status of multinode-955565-m02 ...
	I0229 02:52:05.926367 1235855 cli_runner.go:164] Run: docker container inspect multinode-955565-m02 --format={{.State.Status}}
	I0229 02:52:05.947241 1235855 status.go:330] multinode-955565-m02 host status = "Stopped" (err=<nil>)
	I0229 02:52:05.947261 1235855 status.go:343] host is not running, skipping remaining checks
	I0229 02:52:05.947269 1235855 status.go:257] multinode-955565-m02 status: &{Name:multinode-955565-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.89s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (81.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-955565 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-955565 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m20.575505382s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-955565 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (81.27s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (33.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-955565
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-955565-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-955565-m02 --driver=docker  --container-runtime=crio: exit status 14 (97.723621ms)

                                                
                                                
-- stdout --
	* [multinode-955565-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-955565-m02' is duplicated with machine name 'multinode-955565-m02' in profile 'multinode-955565'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-955565-m03 --driver=docker  --container-runtime=crio
E0229 02:53:36.508156 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-955565-m03 --driver=docker  --container-runtime=crio: (30.828795832s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-955565
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-955565: exit status 80 (329.065081ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-955565
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-955565-m03 already exists in multinode-955565-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-955565-m03
E0229 02:53:58.722446 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-955565-m03: (2.256713865s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.58s)

                                                
                                    
x
+
TestPreload (168.54s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-077641 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-077641 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m24.431181603s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-077641 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-077641 image pull gcr.io/k8s-minikube/busybox: (1.771966186s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-077641
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-077641: (5.850818416s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-077641 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0229 02:56:01.529710 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-077641 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m13.934737741s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-077641 image list
helpers_test.go:175: Cleaning up "test-preload-077641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-077641
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-077641: (2.301647852s)
--- PASS: TestPreload (168.54s)
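The preload round trip above can be reproduced as follows (a sketch; the profile name is illustrative, the flags are the ones recorded in the log):

	$ minikube start -p preload-demo --memory=2200 --wait=true --preload=false \
	    --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
	$ minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	$ minikube stop -p preload-demo
	$ minikube start -p preload-demo --memory=2200 --wait=true --driver=docker --container-runtime=crio
	$ minikube -p preload-demo image list     # the manually pulled image should still be listed after the restart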

                                                
                                    
x
+
TestScheduledStopUnix (110.03s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-715680 --memory=2048 --driver=docker  --container-runtime=crio
E0229 02:57:24.574039 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-715680 --memory=2048 --driver=docker  --container-runtime=crio: (33.02655831s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-715680 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-715680 -n scheduled-stop-715680
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-715680 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-715680 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-715680 -n scheduled-stop-715680
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-715680
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-715680 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0229 02:58:36.508700 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-715680
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-715680: exit status 7 (77.518503ms)

                                                
                                                
-- stdout --
	scheduled-stop-715680
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-715680 -n scheduled-stop-715680
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-715680 -n scheduled-stop-715680: exit status 7 (84.96815ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-715680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-715680
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-715680: (5.304304077s)
--- PASS: TestScheduledStopUnix (110.03s)
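The scheduled-stop flags used above work as follows (a minimal sketch; the profile name is illustrative):

	$ minikube stop -p sched-demo --schedule 5m          # arm a stop five minutes out
	$ minikube stop -p sched-demo --cancel-scheduled     # disarm it again
	$ minikube stop -p sched-demo --schedule 15s         # arm a short schedule and let it fire
	$ minikube status --format={{.Host}} -p sched-demo   # reports Stopped (exit status 7) once the stop has run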

                                                
                                    
x
+
TestInsufficientStorage (11.02s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-010582 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-010582 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.530186439s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5b6c8296-fad0-46ac-9145-eca28f820180","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-010582] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"37a4b07a-ce1e-4a86-814b-e5b18d6152ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18063"}}
	{"specversion":"1.0","id":"6f58e5bc-e7ba-4039-b9f5-8d8d3c85b62b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6ca307e2-45db-4bcd-a4f8-563d3cf49629","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig"}}
	{"specversion":"1.0","id":"d5d11075-c595-4f4e-885c-8527847512b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube"}}
	{"specversion":"1.0","id":"d70e1ca1-a89c-433b-a1e3-612df64f8ba1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"94017825-f379-48ec-bfa6-bec8a135514d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1b8caafd-cdf9-48b1-ac17-e88948177918","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"370fc957-08dd-4c0f-8555-5c14383a89c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6a289e1e-714a-4611-96d5-b7fbe668d12d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d183998-cd44-4d04-bf09-6f57a980962d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"850da72b-5c7c-4b3b-8e1d-d053f067b25d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-010582 in cluster insufficient-storage-010582","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"642a350a-88cf-4613-aacd-9be928376ce2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708944392-18244 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ab70811f-5131-46e7-8b4b-35e9c83fb514","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c10ab4ae-3743-4a5f-9a0a-ab6a712d97a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-010582 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-010582 --output=json --layout=cluster: exit status 7 (278.775288ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-010582","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-010582","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:58:52.107613 1252329 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-010582" does not appear in /home/jenkins/minikube-integration/18063-1148303/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-010582 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-010582 --output=json --layout=cluster: exit status 7 (303.413616ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-010582","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-010582","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:58:52.414585 1252382 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-010582" does not appear in /home/jenkins/minikube-integration/18063-1148303/kubeconfig
	E0229 02:58:52.424751 1252382 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/insufficient-storage-010582/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-010582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-010582
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-010582: (1.909527101s)
--- PASS: TestInsufficientStorage (11.02s)
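The two status checks above read the cluster-layout JSON; a minimal sketch of that query:

	$ minikube status -p <profile> --output=json --layout=cluster
	# exits 7 when /var is nearly full, with StatusCode 507 / StatusName "InsufficientStorage" in the JSON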

                                                
                                    
x
+
TestRunningBinaryUpgrade (76.81s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3799624476 start -p running-upgrade-916956 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3799624476 start -p running-upgrade-916956 --memory=2200 --vm-driver=docker  --container-runtime=crio: (45.89876193s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-916956 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0229 03:03:58.722331 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-916956 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.767633712s)
helpers_test.go:175: Cleaning up "running-upgrade-916956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-916956
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-916956: (2.806996486s)
--- PASS: TestRunningBinaryUpgrade (76.81s)
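The running-binary upgrade above starts a cluster with an older released minikube, then re-runs start on the same profile with the binary under test (the versioned file in /tmp is unpacked by the test harness, so its suffix varies):

	$ /tmp/minikube-v1.26.0.<suffix> start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=crio
	$ out/minikube-linux-arm64 start -p upgrade-demo --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	$ out/minikube-linux-arm64 delete -p upgrade-demo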

                                                
                                    
x
+
TestKubernetesUpgrade (139.02s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-599502 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0229 03:01:01.529730 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-599502 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m6.684867506s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-599502
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-599502: (1.338383775s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-599502 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-599502 status --format={{.Host}}: exit status 7 (72.758599ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-599502 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-599502 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.688177514s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-599502 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-599502 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-599502 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (138.33753ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-599502] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-599502
	    minikube start -p kubernetes-upgrade-599502 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5995022 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-599502 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-599502 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-599502 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.364017543s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-599502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-599502
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-599502: (2.556574609s)
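For reference, the upgrade path exercised by this test condenses to the following command sequence (a sketch only: it assumes a released minikube binary in place of the test build out/minikube-linux-arm64, with the profile name and versions taken from the log above):

	# Start an old cluster, stop it, then upgrade it in place.
	minikube start -p kubernetes-upgrade-599502 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-599502
	minikube start -p kubernetes-upgrade-599502 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=crio
	# An in-place downgrade of the same profile is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106).
	minikube start -p kubernetes-upgrade-599502 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio || echo "downgrade refused, as expected"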
--- PASS: TestKubernetesUpgrade (139.02s)

TestMissingContainerUpgrade (163.28s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2906072631 start -p missing-upgrade-130311 --memory=2200 --driver=docker  --container-runtime=crio
E0229 03:00:21.769084 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2906072631 start -p missing-upgrade-130311 --memory=2200 --driver=docker  --container-runtime=crio: (1m21.393568528s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-130311
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-130311: (10.414584191s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-130311
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-130311 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-130311 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m6.897220599s)
helpers_test.go:175: Cleaning up "missing-upgrade-130311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-130311
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-130311: (1.978189734s)
--- PASS: TestMissingContainerUpgrade (163.28s)

TestPause/serial/Start (58.96s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-027195 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-027195 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (58.963239145s)
--- PASS: TestPause/serial/Start (58.96s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-920309 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-920309 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (104.928691ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-920309] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (42.21s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-920309 --driver=docker  --container-runtime=crio
E0229 02:58:58.722976 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-920309 --driver=docker  --container-runtime=crio: (41.816185018s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-920309 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.21s)

TestNoKubernetes/serial/StartWithStopK8s (6.75s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-920309 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-920309 --no-kubernetes --driver=docker  --container-runtime=crio: (4.390918636s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-920309 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-920309 status -o json: exit status 2 (378.695576ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-920309","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-920309
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-920309: (1.984227767s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.75s)

TestNoKubernetes/serial/Start (6.54s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-920309 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-920309 --no-kubernetes --driver=docker  --container-runtime=crio: (6.538079861s)
--- PASS: TestNoKubernetes/serial/Start (6.54s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-920309 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-920309 "sudo systemctl is-active --quiet service kubelet": exit status 1 (308.064969ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
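The check above can be reproduced by hand; systemctl is-active exits non-zero (status 3 here) when the unit is inactive, which is exactly what this test asserts. A minimal sketch, assuming a released minikube binary in place of the test build:

	minikube ssh -p NoKubernetes-920309 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not active"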
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

TestNoKubernetes/serial/ProfileList (0.98s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.98s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-920309
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-920309: (1.218300613s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (6.98s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-920309 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-920309 --driver=docker  --container-runtime=crio: (6.978404987s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.98s)

TestPause/serial/SecondStartNoReconfiguration (47.26s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-027195 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-027195 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.226152957s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (47.26s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-920309 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-920309 "sudo systemctl is-active --quiet service kubelet": exit status 1 (265.591319ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestPause/serial/Pause (0.81s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-027195 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.81s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-027195 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-027195 --output=json --layout=cluster: exit status 2 (327.285166ms)

                                                
                                                
-- stdout --
	{"Name":"pause-027195","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-027195","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
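The status JSON above can be inspected field by field. A minimal sketch, assuming jq is available (profile name and field paths are taken from the output shown):

	# Overall cluster state ("Paused") and the kubelet component state ("Stopped").
	minikube status -p pause-027195 --output=json --layout=cluster | jq -r '.StatusName'
	minikube status -p pause-027195 --output=json --layout=cluster | jq -r '.Nodes[0].Components.kubelet.StatusName'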
--- PASS: TestPause/serial/VerifyStatus (0.33s)

TestPause/serial/Unpause (0.81s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-027195 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.81s)

TestPause/serial/PauseAgain (1s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-027195 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-027195 --alsologtostderr -v=5: (1.000918797s)
--- PASS: TestPause/serial/PauseAgain (1.00s)

TestPause/serial/DeletePaused (4.32s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-027195 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-027195 --alsologtostderr -v=5: (4.323264094s)
--- PASS: TestPause/serial/DeletePaused (4.32s)

TestPause/serial/VerifyDeletedResources (0.18s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-027195
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-027195: exit status 1 (24.452835ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-027195: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.18s)

TestStoppedBinaryUpgrade/Setup (1.32s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.32s)

TestStoppedBinaryUpgrade/Upgrade (81.59s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1397005721 start -p stopped-upgrade-517985 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1397005721 start -p stopped-upgrade-517985 --memory=2200 --vm-driver=docker  --container-runtime=crio: (42.603187823s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1397005721 -p stopped-upgrade-517985 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1397005721 -p stopped-upgrade-517985 stop: (2.684280205s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-517985 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0229 03:03:36.506587 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-517985 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.300430349s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (81.59s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.09s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-517985
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-517985: (2.088168804s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.09s)

TestNetworkPlugins/group/false (5.08s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-969375 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-969375 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (269.30888ms)

                                                
                                                
-- stdout --
	* [false-969375] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 03:04:28.836991 1284252 out.go:291] Setting OutFile to fd 1 ...
	I0229 03:04:28.837178 1284252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 03:04:28.837191 1284252 out.go:304] Setting ErrFile to fd 2...
	I0229 03:04:28.837197 1284252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 03:04:28.837494 1284252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-1148303/.minikube/bin
	I0229 03:04:28.837911 1284252 out.go:298] Setting JSON to false
	I0229 03:04:28.838867 1284252 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24415,"bootTime":1709151454,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0229 03:04:28.838939 1284252 start.go:139] virtualization:  
	I0229 03:04:28.842308 1284252 out.go:177] * [false-969375] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0229 03:04:28.844974 1284252 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 03:04:28.847234 1284252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 03:04:28.845113 1284252 notify.go:220] Checking for updates...
	I0229 03:04:28.851581 1284252 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-1148303/kubeconfig
	I0229 03:04:28.856674 1284252 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-1148303/.minikube
	I0229 03:04:28.858797 1284252 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0229 03:04:28.861210 1284252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 03:04:28.864555 1284252 config.go:182] Loaded profile config "force-systemd-flag-554808": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 03:04:28.864663 1284252 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 03:04:28.909684 1284252 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0229 03:04:28.909805 1284252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 03:04:29.017422 1284252 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-29 03:04:29.00306157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.6]] Warnings:<nil>}}
	I0229 03:04:29.017527 1284252 docker.go:295] overlay module found
	I0229 03:04:29.021612 1284252 out.go:177] * Using the docker driver based on user configuration
	I0229 03:04:29.024034 1284252 start.go:299] selected driver: docker
	I0229 03:04:29.024055 1284252 start.go:903] validating driver "docker" against <nil>
	I0229 03:04:29.024070 1284252 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 03:04:29.026882 1284252 out.go:177] 
	W0229 03:04:29.029107 1284252 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0229 03:04:29.030892 1284252 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-969375 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-969375

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-969375

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-969375

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-969375

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-969375

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-969375

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-969375

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-969375

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-969375

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-969375

>>> host: /etc/nsswitch.conf:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: /etc/hosts:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: /etc/resolv.conf:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-969375

>>> host: crictl pods:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: crictl containers:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> k8s: describe netcat deployment:
error: context "false-969375" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-969375" does not exist

>>> k8s: netcat logs:
error: context "false-969375" does not exist

>>> k8s: describe coredns deployment:
error: context "false-969375" does not exist

>>> k8s: describe coredns pods:
error: context "false-969375" does not exist

>>> k8s: coredns logs:
error: context "false-969375" does not exist

>>> k8s: describe api server pod(s):
error: context "false-969375" does not exist

>>> k8s: api server logs:
error: context "false-969375" does not exist

>>> host: /etc/cni:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: ip a s:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: ip r s:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: iptables-save:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: iptables table nat:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> k8s: describe kube-proxy daemon set:
error: context "false-969375" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-969375" does not exist

>>> k8s: kube-proxy logs:
error: context "false-969375" does not exist

>>> host: kubelet daemon status:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: kubelet daemon config:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> k8s: kubelet logs:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-969375

>>> host: docker daemon status:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: docker daemon config:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: /etc/docker/daemon.json:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: docker system info:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: cri-docker daemon status:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: cri-docker daemon config:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: cri-dockerd version:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: containerd daemon status:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: containerd daemon config:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: /etc/containerd/config.toml:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: containerd config dump:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: crio daemon status:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: crio daemon config:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: /etc/crio:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"

>>> host: crio config:
* Profile "false-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-969375"
----------------------- debugLogs end: false-969375 [took: 4.600319015s] --------------------------------
helpers_test.go:175: Cleaning up "false-969375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-969375
--- PASS: TestNetworkPlugins/group/false (5.08s)

TestStartStop/group/old-k8s-version/serial/FirstStart (135.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-152228 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0229 03:06:39.557071 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-152228 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m15.833982766s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (135.83s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-152228 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f4878177-de3a-4476-81cf-8c37acde442d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f4878177-de3a-4476-81cf-8c37acde442d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003367859s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-152228 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-152228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-152228 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-152228 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-152228 --alsologtostderr -v=3: (12.159797404s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-152228 -n old-k8s-version-152228
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-152228 -n old-k8s-version-152228: exit status 7 (85.211678ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-152228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (451.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-152228 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0229 03:08:58.722267 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-152228 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m31.371824767s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-152228 -n old-k8s-version-152228
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (451.80s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-010339 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-010339 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (53.39908959s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.40s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-010339 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [02fad50a-0fd7-422f-b3cd-7bf616132731] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [02fad50a-0fd7-422f-b3cd-7bf616132731] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004395743s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-010339 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)
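
A minimal shell sketch of reproducing this deploy-and-probe step by hand (the contents of testdata/busybox.yaml are not shown in this report; the explicit wait is my own addition, with the pod label and 8-minute budget taken from the log above):

    kubectl --context default-k8s-diff-port-010339 create -f testdata/busybox.yaml
    # block until the busybox pod is Ready (the test polls for up to 8 minutes)
    kubectl --context default-k8s-diff-port-010339 wait --for=condition=Ready \
      pod -l integration-test=busybox --timeout=480s
    # the test's actual assertion target: the container's open-file limit
    kubectl --context default-k8s-diff-port-010339 exec busybox -- /bin/sh -c "ulimit -n"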

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-010339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-010339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.022281096s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-010339 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)
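
A small sketch of how the image/registry override enabled here could be spot-checked by hand (the addon command is as logged above; the grep on the Deployment description is my own addition):

    out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-010339 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # confirm the overridden registry/image made it onto the Deployment spec
    kubectl --context default-k8s-diff-port-010339 -n kube-system \
      describe deploy/metrics-server | grep -i 'image:'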

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-010339 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-010339 --alsologtostderr -v=3: (11.940445105s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-010339 -n default-k8s-diff-port-010339
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-010339 -n default-k8s-diff-port-010339: exit status 7 (76.968865ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-010339 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)
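
A minimal sketch of the exit-code handling this step relies on (the reading of exit status 7 comes from the "(may be ok)" note above; the "|| echo" guard is my own):

    # while the node is stopped, "status" prints Stopped and exits non-zero (7 in this run),
    # so mask the failure if the script runs under "set -e"
    out/minikube-linux-arm64 status --format='{{.Host}}' \
      -p default-k8s-diff-port-010339 -n default-k8s-diff-port-010339 || echo "status exit code: $?"
    # addons can still be toggled against the stopped profile
    out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-010339 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4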

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (620.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-010339 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0229 03:11:01.529749 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 03:13:36.506314 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
E0229 03:13:58.722853 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
E0229 03:14:04.574522 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 03:16:01.529618 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-010339 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m19.870906534s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-010339 -n default-k8s-diff-port-010339
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (620.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gzfmj" [02ab2de6-6223-46ae-82b1-0a08f9662340] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003777932s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
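
A short sketch of running the same check by hand (selector and namespace taken from the wait above; the timeout value here is an arbitrary choice):

    kubectl --context old-k8s-version-152228 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard
    kubectl --context old-k8s-version-152228 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=120s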

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gzfmj" [02ab2de6-6223-46ae-82b1-0a08f9662340] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003256458s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-152228 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-152228 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
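
A rough sketch of inspecting the same image list by hand (the JSON form is what the test itself queries; the plain listing and grep are my own, and assume the default output prints one image reference per line):

    out/minikube-linux-arm64 -p old-k8s-version-152228 image list --format=json
    # quick eyeball of the extra images the test calls out above
    out/minikube-linux-arm64 -p old-k8s-version-152228 image list | grep -E 'kindnetd|busybox'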

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-152228 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-152228 -n old-k8s-version-152228
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-152228 -n old-k8s-version-152228: exit status 2 (328.404365ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-152228 -n old-k8s-version-152228
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-152228 -n old-k8s-version-152228: exit status 2 (325.489761ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-152228 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-152228 -n old-k8s-version-152228
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-152228 -n old-k8s-version-152228
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.26s)
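
A minimal sketch of the pause/unpause round trip this step performs (commands as logged above; the "|| true" guards are my own, since the log shows the status queries exiting 2 with Paused/Stopped while the profile is paused):

    out/minikube-linux-arm64 pause -p old-k8s-version-152228 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-152228 -n old-k8s-version-152228 || true
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p old-k8s-version-152228 -n old-k8s-version-152228 || true
    out/minikube-linux-arm64 unpause -p old-k8s-version-152228 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-152228 -n old-k8s-version-152228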

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (77.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-756170 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0229 03:17:01.769504 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-756170 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m17.226199971s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-756170 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [787ba1e1-ce1c-443a-b0c9-8f9971b8babf] Pending
helpers_test.go:344: "busybox" [787ba1e1-ce1c-443a-b0c9-8f9971b8babf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [787ba1e1-ce1c-443a-b0c9-8f9971b8babf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003478278s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-756170 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-756170 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-756170 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.040697147s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-756170 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-756170 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-756170 --alsologtostderr -v=3: (12.171727874s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-756170 -n embed-certs-756170
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-756170 -n embed-certs-756170: exit status 7 (80.392622ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-756170 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (344.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-756170 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0229 03:18:19.263180 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:18:19.268481 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:18:19.279124 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:18:19.299457 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:18:19.340309 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:18:19.420901 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:18:19.581471 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:18:19.901883 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:18:20.542585 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:18:21.823347 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:18:24.384160 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:18:29.505361 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:18:36.506556 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
E0229 03:18:39.745790 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:18:58.722933 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
E0229 03:19:00.230068 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:19:41.190706 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-756170 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m44.341807038s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-756170 -n embed-certs-756170
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (344.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rc9dx" [6fdb8c10-68ba-439e-b478-ab7c7523cb0d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004208853s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rc9dx" [6fdb8c10-68ba-439e-b478-ab7c7523cb0d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004252988s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-010339 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-010339 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-010339 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-010339 -n default-k8s-diff-port-010339
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-010339 -n default-k8s-diff-port-010339: exit status 2 (342.083247ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-010339 -n default-k8s-diff-port-010339
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-010339 -n default-k8s-diff-port-010339: exit status 2 (305.631762ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-010339 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-010339 -n default-k8s-diff-port-010339
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-010339 -n default-k8s-diff-port-010339
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (65.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-288572 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0229 03:21:01.530101 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 03:21:03.111509 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-288572 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m5.904316409s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-288572 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1b001990-9ca6-4a4c-a2ac-1a3dcced324c] Pending
helpers_test.go:344: "busybox" [1b001990-9ca6-4a4c-a2ac-1a3dcced324c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1b001990-9ca6-4a4c-a2ac-1a3dcced324c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004411809s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-288572 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-288572 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-288572 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.011533612s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-288572 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-288572 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-288572 --alsologtostderr -v=3: (11.956537973s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.96s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-288572 -n no-preload-288572
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-288572 -n no-preload-288572: exit status 7 (84.124182ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-288572 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (363.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-288572 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0229 03:23:19.262891 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:23:19.558068 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
E0229 03:23:36.506533 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
E0229 03:23:46.951673 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-288572 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (6m2.422379521s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-288572 -n no-preload-288572
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (363.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9jfzw" [d19ffe36-7c70-4fcc-b094-6594bd650d07] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0229 03:23:58.723060 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9jfzw" [d19ffe36-7c70-4fcc-b094-6594bd650d07] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004067254s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9jfzw" [d19ffe36-7c70-4fcc-b094-6594bd650d07] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003685522s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-756170 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-756170 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-756170 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-756170 -n embed-certs-756170
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-756170 -n embed-certs-756170: exit status 2 (349.533895ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-756170 -n embed-certs-756170
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-756170 -n embed-certs-756170: exit status 2 (339.738838ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-756170 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-756170 -n embed-certs-756170
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-756170 -n embed-certs-756170
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (44.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-235444 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0229 03:24:57.447563 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
E0229 03:24:57.452868 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
E0229 03:24:57.463080 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
E0229 03:24:57.483484 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
E0229 03:24:57.523746 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
E0229 03:24:57.604072 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
E0229 03:24:57.764430 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
E0229 03:24:58.084621 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
E0229 03:24:58.725128 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
E0229 03:25:00.015271 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
E0229 03:25:02.576386 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-235444 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (44.044610708s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.05s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-235444 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-235444 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.156647276s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-235444 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-235444 --alsologtostderr -v=3: (1.295046267s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-235444 -n newest-cni-235444
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-235444 -n newest-cni-235444: exit status 7 (83.982932ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-235444 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (31.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-235444 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0229 03:25:07.697432 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
E0229 03:25:17.937633 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-235444 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (30.987700396s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-235444 -n newest-cni-235444
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-235444 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.86s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-235444 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-235444 -n newest-cni-235444
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-235444 -n newest-cni-235444: exit status 2 (326.784953ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-235444 -n newest-cni-235444
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-235444 -n newest-cni-235444: exit status 2 (308.599079ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-235444 --alsologtostderr -v=1
E0229 03:25:38.418492 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-235444 -n newest-cni-235444
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-235444 -n newest-cni-235444
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.86s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (49.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-969375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0229 03:26:01.529696 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 03:26:19.379245 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-969375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (49.725546112s)
--- PASS: TestNetworkPlugins/group/auto/Start (49.73s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-969375 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)
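
A small sketch of what this flag check could look like when run by hand (the ssh command is as logged above; the grep for the CRI endpoint is my own and assumes the kubelet command line carries a container-runtime-endpoint flag pointing at cri-o):

    out/minikube-linux-arm64 ssh -p auto-969375 "pgrep -a kubelet"
    out/minikube-linux-arm64 ssh -p auto-969375 "pgrep -a kubelet" | grep -o 'container-runtime-endpoint=[^ ]*'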

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-969375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mbn82" [1e4a19c9-bc08-488f-b98c-3681e7a50f9d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mbn82" [1e4a19c9-bc08-488f-b98c-3681e7a50f9d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004317322s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-969375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
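
For reference, the three connectivity probes from this group gathered into one hand-runnable sketch (commands verbatim from the logs above; the comments describing what each one verifies are mine):

    # DNS: the pod can resolve the in-cluster kubernetes service
    kubectl --context auto-969375 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: the pod can reach its own port over localhost
    kubectl --context auto-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod can reach itself back through its own Service name
    kubectl --context auto-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"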

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (50.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-969375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0229 03:27:41.299505 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-969375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (50.114456858s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (50.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mrdrh" [d97c18b2-91ae-4e78-8328-8ecdc962ddcd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.01805516s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)
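
A quick sketch of the equivalent manual check (the app=kindnet label and kube-system namespace come from the wait above; the explicit wait and its timeout are my own):

    kubectl --context kindnet-969375 -n kube-system get pods -l app=kindnet -o wide
    kubectl --context kindnet-969375 -n kube-system \
      wait --for=condition=Ready pod -l app=kindnet --timeout=600s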

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-969375 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.64s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-969375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7pwbq" [99672364-9195-4869-bafa-28e72debfacc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7pwbq" [99672364-9195-4869-bafa-28e72debfacc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003769959s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-969375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)
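The DNS subtest resolves kubernetes.default from inside the netcat pod, which exercises the CNI data path and cluster DNS in one shot. The same check by hand (on a default minikube service CIDR the answer is typically the kubernetes service ClusterIP, 10.96.0.1):

  kubectl --context kindnet-969375 exec deployment/netcat -- nslookup kubernetes.default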

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rsrkk" [bc434bf6-6576-4e74-85ed-574d8cc04ed1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003886229s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)
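UserAppExistsAfterStop runs after the no-preload profile has been stopped and started again, so it only passes if the previously deployed dashboard workload comes back on its own. A quick manual equivalent, using the selector from the test (the dashboard deployment should carry the same label, but listing it by name works as well):

  kubectl --context no-preload-288572 -n kubernetes-dashboard get deploy,pods -l k8s-app=kubernetes-dashboard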

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rsrkk" [bc434bf6-6576-4e74-85ed-574d8cc04ed1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00439305s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-288572 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (86.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-969375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0229 03:28:36.506865 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-969375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m26.708586947s)
--- PASS: TestNetworkPlugins/group/calico/Start (86.71s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-288572 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
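VerifyKubernetesImages lists what is cached in the node's container runtime and reports anything that is not a stock Kubernetes/minikube image; here that is the busybox test image and the kindnet CNI image. The same listing can be produced directly with the command from the log:

  out/minikube-linux-arm64 -p no-preload-288572 image list --format=json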

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (4.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-288572 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-288572 --alsologtostderr -v=1: (1.098783006s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-288572 -n no-preload-288572
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-288572 -n no-preload-288572: exit status 2 (388.768443ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-288572 -n no-preload-288572
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-288572 -n no-preload-288572: exit status 2 (353.989258ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-288572 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-288572 -n no-preload-288572
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-288572 -n no-preload-288572
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.23s)
E0229 03:32:54.057380 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/kindnet-969375/client.crt: no such file or directory
E0229 03:32:54.062671 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/kindnet-969375/client.crt: no such file or directory
E0229 03:32:54.072916 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/kindnet-969375/client.crt: no such file or directory
E0229 03:32:54.093164 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/kindnet-969375/client.crt: no such file or directory
E0229 03:32:54.133409 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/kindnet-969375/client.crt: no such file or directory
E0229 03:32:54.213657 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/kindnet-969375/client.crt: no such file or directory
E0229 03:32:54.374011 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/kindnet-969375/client.crt: no such file or directory
E0229 03:32:54.383237 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/auto-969375/client.crt: no such file or directory
E0229 03:32:54.694785 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/kindnet-969375/client.crt: no such file or directory
E0229 03:32:55.335764 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/kindnet-969375/client.crt: no such file or directory
E0229 03:32:56.616114 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/kindnet-969375/client.crt: no such file or directory
E0229 03:32:59.176315 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/kindnet-969375/client.crt: no such file or directory
E0229 03:33:04.296508 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/kindnet-969375/client.crt: no such file or directory
E0229 03:33:14.536918 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/kindnet-969375/client.crt: no such file or directory
E0229 03:33:19.262831 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/old-k8s-version-152228/client.crt: no such file or directory
E0229 03:33:25.789258 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/no-preload-288572/client.crt: no such file or directory
E0229 03:33:35.017864 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/kindnet-969375/client.crt: no such file or directory
E0229 03:33:36.506552 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/addons-847636/client.crt: no such file or directory
E0229 03:33:41.770579 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
E0229 03:33:58.722220 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
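The Pause subtest above drives this sequence: pause the profile, confirm the apiserver reports Paused and the kubelet reports Stopped (status exits with code 2 in that state, which the test tolerates), then unpause and check both again. By hand, with the same flags (the post-unpause output is expected to be Running for both):

  out/minikube-linux-arm64 pause -p no-preload-288572
  out/minikube-linux-arm64 status -p no-preload-288572 --format={{.APIServer}}   # Paused
  out/minikube-linux-arm64 status -p no-preload-288572 --format={{.Kubelet}}     # Stopped
  out/minikube-linux-arm64 unpause -p no-preload-288572
  out/minikube-linux-arm64 status -p no-preload-288572 --format={{.APIServer}}   # expected: Running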

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (71.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-969375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0229 03:28:58.723081 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/ingress-addon-legacy-080946/client.crt: no such file or directory
E0229 03:29:57.448273 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/default-k8s-diff-port-010339/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-969375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.919297143s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.92s)
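Note that --cni accepts either a built-in plugin name (for example kindnet, calico, flannel, bridge) or, as in this run, a path to a CNI manifest that minikube applies once the node is up; testdata/kube-flannel.yaml appears to be the repository's bundled copy of the upstream Flannel manifest. The invocation pattern, abbreviated from the log:

  out/minikube-linux-arm64 start -p custom-flannel-969375 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio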

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-969375 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)
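KubeletFlags dumps the running kubelet command line over SSH so the test can inspect the flags the node was actually started with. Manually:

  out/minikube-linux-arm64 ssh -p custom-flannel-969375 "pgrep -a kubelet"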

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-969375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5878t" [090aff69-cd96-43a2-889f-355aea74946e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5878t" [090aff69-cd96-43a2-889f-355aea74946e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004427339s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qqzlx" [5cdc1ebe-646a-450a-acc5-d90de6def12a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005697694s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-969375 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-969375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wjr2x" [36be7a87-4898-43e0-a11b-8d262eb76b3d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wjr2x" [36be7a87-4898-43e0-a11b-8d262eb76b3d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.021000188s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)
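Each NetCatPod step force-replaces the small netcat deployment (single dnsutils container) from testdata/netcat-deployment.yaml and waits for its pod to go Ready; the later DNS, Localhost and HairPin steps then exec into it. A manual run along the same lines (label from the test output, timeout arbitrary):

  kubectl --context calico-969375 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context calico-969375 wait --for=condition=Ready pod -l app=netcat --timeout=2m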

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-969375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-969375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (92.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-969375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-969375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m32.485948373s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.49s)
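--enable-default-cni=true is the legacy way to select the built-in bridge CNI (recent minikube releases mark the flag deprecated in favor of --cni=bridge); keeping a run with the old flag verifies that the legacy spelling still yields a working pod network. The current-flag equivalent, assuming a recent minikube, would be:

  out/minikube-linux-arm64 start -p enable-default-cni-969375 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio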

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (70.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-969375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0229 03:31:01.529413 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/functional-552840/client.crt: no such file or directory
E0229 03:31:32.458223 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/auto-969375/client.crt: no such file or directory
E0229 03:31:32.463692 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/auto-969375/client.crt: no such file or directory
E0229 03:31:32.473958 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/auto-969375/client.crt: no such file or directory
E0229 03:31:32.494235 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/auto-969375/client.crt: no such file or directory
E0229 03:31:32.534626 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/auto-969375/client.crt: no such file or directory
E0229 03:31:32.614836 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/auto-969375/client.crt: no such file or directory
E0229 03:31:32.775533 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/auto-969375/client.crt: no such file or directory
E0229 03:31:33.096444 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/auto-969375/client.crt: no such file or directory
E0229 03:31:33.737340 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/auto-969375/client.crt: no such file or directory
E0229 03:31:35.018079 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/auto-969375/client.crt: no such file or directory
E0229 03:31:37.579227 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/auto-969375/client.crt: no such file or directory
E0229 03:31:42.701245 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/auto-969375/client.crt: no such file or directory
E0229 03:31:52.942154 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/auto-969375/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-969375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m10.140675369s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-svhm7" [ae1a1ecf-149f-474d-9ac9-d61d5a312710] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004579636s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-969375 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-969375 replace --force -f testdata/netcat-deployment.yaml
E0229 03:32:03.862877 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/no-preload-288572/client.crt: no such file or directory
E0229 03:32:03.868560 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/no-preload-288572/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0229 03:32:03.879497 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/no-preload-288572/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-t25pl" [d499d502-ce0b-4678-b70d-50b8bc5789bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0229 03:32:03.899745 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/no-preload-288572/client.crt: no such file or directory
E0229 03:32:03.940065 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/no-preload-288572/client.crt: no such file or directory
E0229 03:32:04.020446 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/no-preload-288572/client.crt: no such file or directory
E0229 03:32:04.180834 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/no-preload-288572/client.crt: no such file or directory
E0229 03:32:04.501570 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/no-preload-288572/client.crt: no such file or directory
E0229 03:32:05.142254 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/no-preload-288572/client.crt: no such file or directory
E0229 03:32:06.423141 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/no-preload-288572/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-t25pl" [d499d502-ce0b-4678-b70d-50b8bc5789bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00387391s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-969375 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-969375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-czwgs" [531d5e51-b525-4b9e-8145-bcbac3c04f7a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0229 03:32:08.984258 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/no-preload-288572/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-czwgs" [531d5e51-b525-4b9e-8145-bcbac3c04f7a] Running
E0229 03:32:13.423033 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/auto-969375/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003994836s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-969375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0229 03:32:14.104712 1153658 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-1148303/.minikube/profiles/no-preload-288572/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-969375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (83.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-969375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-969375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m23.209240081s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-969375 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-969375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z7wdc" [584e0e67-2371-4fda-9d6c-f2e3faac708a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z7wdc" [584e0e67-2371-4fda-9d6c-f2e3faac708a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003925603s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-969375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-969375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (32/320)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.59s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-591946 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-591946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-591946
--- SKIP: TestDownloadOnlyKic (0.59s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-647578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-647578
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-969375 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-969375

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-969375

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-969375

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-969375

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-969375

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-969375

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-969375

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-969375

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-969375

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-969375

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: /etc/hosts:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: /etc/resolv.conf:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-969375

>>> host: crictl pods:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: crictl containers:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> k8s: describe netcat deployment:
error: context "kubenet-969375" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-969375" does not exist

>>> k8s: netcat logs:
error: context "kubenet-969375" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-969375" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-969375" does not exist

>>> k8s: coredns logs:
error: context "kubenet-969375" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-969375" does not exist

>>> k8s: api server logs:
error: context "kubenet-969375" does not exist

>>> host: /etc/cni:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: ip a s:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: ip r s:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: iptables-save:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: iptables table nat:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-969375" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-969375" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-969375" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: kubelet daemon config:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> k8s: kubelet logs:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-969375

>>> host: docker daemon status:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: docker daemon config:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: docker system info:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: cri-docker daemon status:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: cri-docker daemon config:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: cri-dockerd version:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: containerd daemon status:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: containerd daemon config:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: containerd config dump:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: crio daemon status:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: crio daemon config:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: /etc/crio:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

>>> host: crio config:
* Profile "kubenet-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-969375"

----------------------- debugLogs end: kubenet-969375 [took: 4.394864403s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-969375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-969375
--- SKIP: TestNetworkPlugins/group/kubenet (4.62s)

x
+
TestNetworkPlugins/group/cilium (5.42s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-969375 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-969375

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-969375

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-969375

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-969375

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-969375

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-969375

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-969375

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-969375

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-969375

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-969375

>>> host: /etc/nsswitch.conf:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: /etc/hosts:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: /etc/resolv.conf:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-969375

>>> host: crictl pods:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: crictl containers:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> k8s: describe netcat deployment:
error: context "cilium-969375" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-969375" does not exist

>>> k8s: netcat logs:
error: context "cilium-969375" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-969375" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-969375" does not exist

>>> k8s: coredns logs:
error: context "cilium-969375" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-969375" does not exist

>>> k8s: api server logs:
error: context "cilium-969375" does not exist

>>> host: /etc/cni:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: ip a s:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: ip r s:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: iptables-save:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: iptables table nat:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-969375

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-969375

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-969375" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-969375" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-969375

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-969375

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-969375" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-969375" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-969375" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-969375" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-969375" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: kubelet daemon config:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> k8s: kubelet logs:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-969375

>>> host: docker daemon status:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: docker daemon config:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: docker system info:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: cri-docker daemon status:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: cri-docker daemon config:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: cri-dockerd version:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: containerd daemon status:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: containerd daemon config:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: containerd config dump:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: crio daemon status:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: crio daemon config:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: /etc/crio:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

>>> host: crio config:
* Profile "cilium-969375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-969375"

----------------------- debugLogs end: cilium-969375 [took: 5.207113535s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-969375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-969375
--- SKIP: TestNetworkPlugins/group/cilium (5.42s)