Test Report: Docker_Linux_crio_arm64 18222

364dec8bbfa467ece5e4dc002f47e6311a48ec7e:2024-02-26:33307

Failed tests (3/314)

| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 39    | TestAddons/parallel/Ingress                          | 168.15       |
| 171   | TestIngressAddonLegacy/serial/ValidateIngressAddons  | 179.85       |
| 269   | TestPause/serial/SecondStartNoReconfiguration        | 122.74       |
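For context, the TestAddons/parallel/Ingress failure detailed below comes down to two probes that timed out: a curl against the nginx ingress, run inside the node over SSH (exit status 28, curl's operation-timed-out code), and an nslookup of the ingress-dns test record against the node IP. A minimal sketch of the same checks run by hand, using the profile name (addons-006797), testdata manifests, binary path, and node IP taken from the log below; the only composition not in the log is feeding the `ip` output into nslookup, and all names are placeholders for whatever profile you are debugging:

  # Make sure the ingress controller is up, then apply the test Ingress and backend pod/service.
  kubectl --context addons-006797 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
  kubectl --context addons-006797 replace --force -f testdata/nginx-ingress-v1.yaml
  kubectl --context addons-006797 replace --force -f testdata/nginx-pod-svc.yaml

  # Probe 1: curl the ingress from inside the node with the Host header the Ingress routes on.
  out/minikube-linux-arm64 -p addons-006797 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

  # Probe 2: resolve the ingress-dns example record against the node IP (192.168.49.2 in this run).
  kubectl --context addons-006797 replace --force -f testdata/ingress-dns-example-v1.yaml
  nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-006797 ip)"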
TestAddons/parallel/Ingress (168.15s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-006797 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-006797 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-006797 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cde0af80-86b3-4164-adf5-af26c194dbcc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cde0af80-86b3-4164-adf5-af26c194dbcc] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00380195s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-006797 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-006797 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.941333419s)
** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-006797 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-006797 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.068981659s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-006797 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-006797 addons disable ingress-dns --alsologtostderr -v=1: (1.380388432s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-006797 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-006797 addons disable ingress --alsologtostderr -v=1: (7.769941042s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-006797
helpers_test.go:235: (dbg) docker inspect addons-006797:
-- stdout --
	[
	    {
	        "Id": "1afea931b3608a2fa2e3c409f2347dae511f091dd77389b20f56f160da8d955f",
	        "Created": "2024-02-26T11:45:10.687814794Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 615265,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:45:11.000532008Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83c2925cbfe44b87b5c0672f16807927d87d8625e89de4dc154c45daaaa04b5b",
	        "ResolvConfPath": "/var/lib/docker/containers/1afea931b3608a2fa2e3c409f2347dae511f091dd77389b20f56f160da8d955f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1afea931b3608a2fa2e3c409f2347dae511f091dd77389b20f56f160da8d955f/hostname",
	        "HostsPath": "/var/lib/docker/containers/1afea931b3608a2fa2e3c409f2347dae511f091dd77389b20f56f160da8d955f/hosts",
	        "LogPath": "/var/lib/docker/containers/1afea931b3608a2fa2e3c409f2347dae511f091dd77389b20f56f160da8d955f/1afea931b3608a2fa2e3c409f2347dae511f091dd77389b20f56f160da8d955f-json.log",
	        "Name": "/addons-006797",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-006797:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-006797",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ec3bbefcbb5361137df7c620642563c53c43da8787fe6cc7779ad2d5e8564767-init/diff:/var/lib/docker/overlay2/f0e0da57c811333114b7a0181d8121ec20f9baacbcf19d34fad5038b1792b1cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec3bbefcbb5361137df7c620642563c53c43da8787fe6cc7779ad2d5e8564767/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec3bbefcbb5361137df7c620642563c53c43da8787fe6cc7779ad2d5e8564767/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec3bbefcbb5361137df7c620642563c53c43da8787fe6cc7779ad2d5e8564767/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-006797",
	                "Source": "/var/lib/docker/volumes/addons-006797/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-006797",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-006797",
	                "name.minikube.sigs.k8s.io": "addons-006797",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e8fcdf890c59dde12b920f4f849b076860867c7be914a3f08ec59f8bbb54e27f",
	            "SandboxKey": "/var/run/docker/netns/e8fcdf890c59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36801"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36800"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36799"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36798"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-006797": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1afea931b360",
	                        "addons-006797"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "a8b58134490275a2248f9b1d13f47fd4f16824cbbfce40e69257724bfd46a468",
	                    "EndpointID": "e90774dd17f38caaa79cad59b5f25c9f516e9ab3c2f7385d370aa3c41830c969",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-006797",
	                        "1afea931b360"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-006797 -n addons-006797
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-006797 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-006797 logs -n 25: (1.511132507s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-744997                                                                     | download-only-744997   | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC | 26 Feb 24 11:44 UTC |
	| delete  | -p download-only-333231                                                                     | download-only-333231   | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC | 26 Feb 24 11:44 UTC |
	| delete  | -p download-only-735131                                                                     | download-only-735131   | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC | 26 Feb 24 11:44 UTC |
	| start   | --download-only -p                                                                          | download-docker-363091 | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC |                     |
	|         | download-docker-363091                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-363091                                                                   | download-docker-363091 | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC | 26 Feb 24 11:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-467666   | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC |                     |
	|         | binary-mirror-467666                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45723                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-467666                                                                     | binary-mirror-467666   | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC | 26 Feb 24 11:44 UTC |
	| addons  | enable dashboard -p                                                                         | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC |                     |
	|         | addons-006797                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC |                     |
	|         | addons-006797                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-006797 --wait=true                                                                | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC | 26 Feb 24 11:47 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-006797 ip                                                                            | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:47 UTC | 26 Feb 24 11:47 UTC |
	| addons  | addons-006797 addons disable                                                                | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:47 UTC | 26 Feb 24 11:47 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-006797 addons                                                                        | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:47 UTC | 26 Feb 24 11:47 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:48 UTC |
	|         | addons-006797                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-006797 ssh curl -s                                                                   | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:48 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-006797 addons                                                                        | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:48 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-006797 addons                                                                        | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:48 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:48 UTC | 26 Feb 24 11:48 UTC |
	|         | -p addons-006797                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-006797 ssh cat                                                                       | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:49 UTC | 26 Feb 24 11:49 UTC |
	|         | /opt/local-path-provisioner/pvc-fdf276d8-1831-47fb-9d00-f09a775c769f_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-006797 addons disable                                                                | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:49 UTC | 26 Feb 24 11:49 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:49 UTC | 26 Feb 24 11:49 UTC |
	|         | addons-006797                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:49 UTC | 26 Feb 24 11:49 UTC |
	|         | -p addons-006797                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-006797 ip                                                                            | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:50 UTC | 26 Feb 24 11:50 UTC |
	| addons  | addons-006797 addons disable                                                                | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:50 UTC | 26 Feb 24 11:50 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-006797 addons disable                                                                | addons-006797          | jenkins | v1.32.0 | 26 Feb 24 11:50 UTC | 26 Feb 24 11:50 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 11:44:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 11:44:46.940726  614803 out.go:291] Setting OutFile to fd 1 ...
	I0226 11:44:46.940908  614803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:44:46.940920  614803 out.go:304] Setting ErrFile to fd 2...
	I0226 11:44:46.940928  614803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:44:46.941200  614803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
	I0226 11:44:46.941680  614803 out.go:298] Setting JSON to false
	I0226 11:44:46.942535  614803 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":88033,"bootTime":1708859854,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0226 11:44:46.942605  614803 start.go:139] virtualization:  
	I0226 11:44:46.945543  614803 out.go:177] * [addons-006797] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0226 11:44:46.948348  614803 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 11:44:46.950070  614803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 11:44:46.948465  614803 notify.go:220] Checking for updates...
	I0226 11:44:46.954025  614803 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 11:44:46.955984  614803 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	I0226 11:44:46.957800  614803 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0226 11:44:46.959807  614803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 11:44:46.961729  614803 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 11:44:46.985031  614803 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0226 11:44:46.985150  614803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:44:47.055839  614803 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-26 11:44:47.045996312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 11:44:47.055946  614803 docker.go:295] overlay module found
	I0226 11:44:47.058536  614803 out.go:177] * Using the docker driver based on user configuration
	I0226 11:44:47.060386  614803 start.go:299] selected driver: docker
	I0226 11:44:47.060401  614803 start.go:903] validating driver "docker" against <nil>
	I0226 11:44:47.060417  614803 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 11:44:47.061289  614803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:44:47.113140  614803 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-26 11:44:47.104271818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 11:44:47.113305  614803 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 11:44:47.113533  614803 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0226 11:44:47.115528  614803 out.go:177] * Using Docker driver with root privileges
	I0226 11:44:47.117252  614803 cni.go:84] Creating CNI manager for ""
	I0226 11:44:47.117274  614803 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 11:44:47.117285  614803 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0226 11:44:47.117300  614803 start_flags.go:323] config:
	{Name:addons-006797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-006797 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:44:47.120026  614803 out.go:177] * Starting control plane node addons-006797 in cluster addons-006797
	I0226 11:44:47.121545  614803 cache.go:121] Beginning downloading kic base image for docker with crio
	I0226 11:44:47.123538  614803 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 11:44:47.125499  614803 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0226 11:44:47.125565  614803 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0226 11:44:47.125577  614803 cache.go:56] Caching tarball of preloaded images
	I0226 11:44:47.125580  614803 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 11:44:47.125662  614803 preload.go:174] Found /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0226 11:44:47.125672  614803 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0226 11:44:47.126026  614803 profile.go:148] Saving config to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/config.json ...
	I0226 11:44:47.126057  614803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/config.json: {Name:mk5a0c0dafc629eaec4b4d626b361560adb3824c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:44:47.140482  614803 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0226 11:44:47.140595  614803 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0226 11:44:47.140614  614803 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory, skipping pull
	I0226 11:44:47.140619  614803 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in cache, skipping pull
	I0226 11:44:47.140627  614803 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	I0226 11:44:47.140632  614803 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf from local cache
	I0226 11:45:03.274177  614803 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf from cached tarball
	I0226 11:45:03.274211  614803 cache.go:194] Successfully downloaded all kic artifacts
	I0226 11:45:03.274242  614803 start.go:365] acquiring machines lock for addons-006797: {Name:mk8abe1c714eb05224f6c45b2235d096705c4bd1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:45:03.275106  614803 start.go:369] acquired machines lock for "addons-006797" in 842.406µs
	I0226 11:45:03.275146  614803 start.go:93] Provisioning new machine with config: &{Name:addons-006797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-006797 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0226 11:45:03.275244  614803 start.go:125] createHost starting for "" (driver="docker")
	I0226 11:45:03.277552  614803 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0226 11:45:03.277827  614803 start.go:159] libmachine.API.Create for "addons-006797" (driver="docker")
	I0226 11:45:03.277866  614803 client.go:168] LocalClient.Create starting
	I0226 11:45:03.277999  614803 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem
	I0226 11:45:04.139606  614803 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem
	I0226 11:45:04.449513  614803 cli_runner.go:164] Run: docker network inspect addons-006797 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0226 11:45:04.467656  614803 cli_runner.go:211] docker network inspect addons-006797 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0226 11:45:04.467737  614803 network_create.go:281] running [docker network inspect addons-006797] to gather additional debugging logs...
	I0226 11:45:04.467758  614803 cli_runner.go:164] Run: docker network inspect addons-006797
	W0226 11:45:04.482461  614803 cli_runner.go:211] docker network inspect addons-006797 returned with exit code 1
	I0226 11:45:04.482503  614803 network_create.go:284] error running [docker network inspect addons-006797]: docker network inspect addons-006797: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-006797 not found
	I0226 11:45:04.482516  614803 network_create.go:286] output of [docker network inspect addons-006797]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-006797 not found
	
	** /stderr **
	I0226 11:45:04.482615  614803 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 11:45:04.498028  614803 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002555750}
	I0226 11:45:04.498073  614803 network_create.go:124] attempt to create docker network addons-006797 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0226 11:45:04.498137  614803 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-006797 addons-006797
	I0226 11:45:04.555977  614803 network_create.go:108] docker network addons-006797 192.168.49.0/24 created
	I0226 11:45:04.556013  614803 kic.go:121] calculated static IP "192.168.49.2" for the "addons-006797" container
	I0226 11:45:04.556102  614803 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0226 11:45:04.570645  614803 cli_runner.go:164] Run: docker volume create addons-006797 --label name.minikube.sigs.k8s.io=addons-006797 --label created_by.minikube.sigs.k8s.io=true
	I0226 11:45:04.586926  614803 oci.go:103] Successfully created a docker volume addons-006797
	I0226 11:45:04.587034  614803 cli_runner.go:164] Run: docker run --rm --name addons-006797-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006797 --entrypoint /usr/bin/test -v addons-006797:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0226 11:45:06.353780  614803 cli_runner.go:217] Completed: docker run --rm --name addons-006797-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006797 --entrypoint /usr/bin/test -v addons-006797:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib: (1.766705656s)
	I0226 11:45:06.353812  614803 oci.go:107] Successfully prepared a docker volume addons-006797
	I0226 11:45:06.353848  614803 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0226 11:45:06.353870  614803 kic.go:194] Starting extracting preloaded images to volume ...
	I0226 11:45:06.353969  614803 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-006797:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0226 11:45:10.615003  614803 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-006797:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (4.26099199s)
	I0226 11:45:10.615037  614803 kic.go:203] duration metric: took 4.261163 seconds to extract preloaded images to volume
	W0226 11:45:10.615196  614803 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0226 11:45:10.615321  614803 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0226 11:45:10.673550  614803 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-006797 --name addons-006797 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006797 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-006797 --network addons-006797 --ip 192.168.49.2 --volume addons-006797:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0226 11:45:11.018860  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Running}}
	I0226 11:45:11.040429  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:11.061917  614803 cli_runner.go:164] Run: docker exec addons-006797 stat /var/lib/dpkg/alternatives/iptables
	I0226 11:45:11.121510  614803 oci.go:144] the created container "addons-006797" has a running status.
	I0226 11:45:11.121544  614803 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa...
	I0226 11:45:11.340927  614803 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0226 11:45:11.362036  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:11.385102  614803 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0226 11:45:11.385131  614803 kic_runner.go:114] Args: [docker exec --privileged addons-006797 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0226 11:45:11.445263  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:11.469928  614803 machine.go:88] provisioning docker machine ...
	I0226 11:45:11.469963  614803 ubuntu.go:169] provisioning hostname "addons-006797"
	I0226 11:45:11.470039  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:11.492811  614803 main.go:141] libmachine: Using SSH client type: native
	I0226 11:45:11.493086  614803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 36801 <nil> <nil>}
	I0226 11:45:11.493098  614803 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-006797 && echo "addons-006797" | sudo tee /etc/hostname
	I0226 11:45:11.494027  614803 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43402->127.0.0.1:36801: read: connection reset by peer
	I0226 11:45:14.645020  614803 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006797
	
	I0226 11:45:14.645155  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:14.661279  614803 main.go:141] libmachine: Using SSH client type: native
	I0226 11:45:14.661538  614803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 36801 <nil> <nil>}
	I0226 11:45:14.661559  614803 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-006797' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-006797/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-006797' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 11:45:14.800660  614803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 11:45:14.800711  614803 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18222-608626/.minikube CaCertPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18222-608626/.minikube}
	I0226 11:45:14.800742  614803 ubuntu.go:177] setting up certificates
	I0226 11:45:14.800753  614803 provision.go:83] configureAuth start
	I0226 11:45:14.800813  614803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006797
	I0226 11:45:14.817248  614803 provision.go:138] copyHostCerts
	I0226 11:45:14.817336  614803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem (1082 bytes)
	I0226 11:45:14.817480  614803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem (1123 bytes)
	I0226 11:45:14.817564  614803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem (1679 bytes)
	I0226 11:45:14.817639  614803 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca-key.pem org=jenkins.addons-006797 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-006797]
	I0226 11:45:15.152771  614803 provision.go:172] copyRemoteCerts
	I0226 11:45:15.152844  614803 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 11:45:15.152892  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:15.169385  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:15.273423  614803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0226 11:45:15.297512  614803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0226 11:45:15.321380  614803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0226 11:45:15.346479  614803 provision.go:86] duration metric: configureAuth took 545.711803ms
	I0226 11:45:15.346512  614803 ubuntu.go:193] setting minikube options for container-runtime
	I0226 11:45:15.346710  614803 config.go:182] Loaded profile config "addons-006797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 11:45:15.346832  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:15.364246  614803 main.go:141] libmachine: Using SSH client type: native
	I0226 11:45:15.364497  614803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 36801 <nil> <nil>}
	I0226 11:45:15.364518  614803 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0226 11:45:15.609638  614803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0226 11:45:15.609729  614803 machine.go:91] provisioned docker machine in 4.139764642s
	I0226 11:45:15.609754  614803 client.go:171] LocalClient.Create took 12.33187938s
	I0226 11:45:15.609793  614803 start.go:167] duration metric: libmachine.API.Create for "addons-006797" took 12.331966262s
	I0226 11:45:15.609821  614803 start.go:300] post-start starting for "addons-006797" (driver="docker")
	I0226 11:45:15.609846  614803 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 11:45:15.609937  614803 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 11:45:15.610004  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:15.626617  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:15.726187  614803 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 11:45:15.729571  614803 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 11:45:15.729610  614803 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 11:45:15.729626  614803 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 11:45:15.729639  614803 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 11:45:15.729654  614803 filesync.go:126] Scanning /home/jenkins/minikube-integration/18222-608626/.minikube/addons for local assets ...
	I0226 11:45:15.729734  614803 filesync.go:126] Scanning /home/jenkins/minikube-integration/18222-608626/.minikube/files for local assets ...
	I0226 11:45:15.729765  614803 start.go:303] post-start completed in 119.924042ms
	I0226 11:45:15.730099  614803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006797
	I0226 11:45:15.745999  614803 profile.go:148] Saving config to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/config.json ...
	I0226 11:45:15.746301  614803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 11:45:15.746354  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:15.762164  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:15.857698  614803 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 11:45:15.862237  614803 start.go:128] duration metric: createHost completed in 12.586976029s
	I0226 11:45:15.862268  614803 start.go:83] releasing machines lock for "addons-006797", held for 12.58714317s
	I0226 11:45:15.862360  614803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006797
	I0226 11:45:15.877842  614803 ssh_runner.go:195] Run: cat /version.json
	I0226 11:45:15.877897  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:15.877923  614803 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 11:45:15.877986  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:15.894585  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:15.903382  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:15.988125  614803 ssh_runner.go:195] Run: systemctl --version
	I0226 11:45:16.130358  614803 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0226 11:45:16.273929  614803 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 11:45:16.278210  614803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0226 11:45:16.298512  614803 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0226 11:45:16.298596  614803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0226 11:45:16.329693  614803 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0226 11:45:16.329765  614803 start.go:475] detecting cgroup driver to use...
	I0226 11:45:16.329816  614803 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 11:45:16.329935  614803 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0226 11:45:16.347153  614803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0226 11:45:16.359094  614803 docker.go:217] disabling cri-docker service (if available) ...
	I0226 11:45:16.359233  614803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0226 11:45:16.373776  614803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0226 11:45:16.388730  614803 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0226 11:45:16.478369  614803 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0226 11:45:16.578847  614803 docker.go:233] disabling docker service ...
	I0226 11:45:16.578969  614803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0226 11:45:16.598794  614803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0226 11:45:16.610801  614803 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0226 11:45:16.703603  614803 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0226 11:45:16.801063  614803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0226 11:45:16.812828  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 11:45:16.829233  614803 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0226 11:45:16.829313  614803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 11:45:16.839305  614803 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0226 11:45:16.839379  614803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 11:45:16.849642  614803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 11:45:16.859252  614803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 11:45:16.868911  614803 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 11:45:16.878177  614803 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 11:45:16.886765  614803 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 11:45:16.895080  614803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:45:16.979536  614803 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0226 11:45:17.113529  614803 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0226 11:45:17.113684  614803 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0226 11:45:17.117410  614803 start.go:543] Will wait 60s for crictl version
	I0226 11:45:17.117473  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:45:17.121195  614803 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0226 11:45:17.161043  614803 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0226 11:45:17.161176  614803 ssh_runner.go:195] Run: crio --version
	I0226 11:45:17.199236  614803 ssh_runner.go:195] Run: crio --version
	I0226 11:45:17.240085  614803 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0226 11:45:17.242065  614803 cli_runner.go:164] Run: docker network inspect addons-006797 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 11:45:17.258096  614803 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0226 11:45:17.262134  614803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 11:45:17.272880  614803 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0226 11:45:17.272962  614803 ssh_runner.go:195] Run: sudo crictl images --output json
	I0226 11:45:17.339502  614803 crio.go:496] all images are preloaded for cri-o runtime.
	I0226 11:45:17.339527  614803 crio.go:415] Images already preloaded, skipping extraction
	I0226 11:45:17.339599  614803 ssh_runner.go:195] Run: sudo crictl images --output json
	I0226 11:45:17.375213  614803 crio.go:496] all images are preloaded for cri-o runtime.
	I0226 11:45:17.375235  614803 cache_images.go:84] Images are preloaded, skipping loading
	I0226 11:45:17.375308  614803 ssh_runner.go:195] Run: crio config
	I0226 11:45:17.422731  614803 cni.go:84] Creating CNI manager for ""
	I0226 11:45:17.422755  614803 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 11:45:17.422776  614803 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 11:45:17.422796  614803 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-006797 NodeName:addons-006797 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0226 11:45:17.422946  614803 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-006797"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 11:45:17.423053  614803 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-006797 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-006797 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0226 11:45:17.423130  614803 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0226 11:45:17.432104  614803 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 11:45:17.432179  614803 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 11:45:17.440993  614803 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0226 11:45:17.459112  614803 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 11:45:17.477361  614803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0226 11:45:17.495388  614803 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0226 11:45:17.498721  614803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 11:45:17.509330  614803 certs.go:56] Setting up /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797 for IP: 192.168.49.2
	I0226 11:45:17.509366  614803 certs.go:190] acquiring lock for shared ca certs: {Name:mkc71f6ba94614715b3b8dc8b06b5f59e5f1adfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:45:17.509549  614803 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.key
	I0226 11:45:18.085758  614803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt ...
	I0226 11:45:18.085801  614803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt: {Name:mkf48824bcbe00faef1c5e233c3907181d8b6e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:45:18.086014  614803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18222-608626/.minikube/ca.key ...
	I0226 11:45:18.086030  614803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/ca.key: {Name:mkeefabfbf0bceb8a4eef40630a23740fcc30238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:45:18.086124  614803 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.key
	I0226 11:45:18.779504  614803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.crt ...
	I0226 11:45:18.779536  614803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.crt: {Name:mkd07d1fa558d12ca8e995d2e8820d73a3b8f68e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:45:18.779736  614803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.key ...
	I0226 11:45:18.779748  614803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.key: {Name:mk427adee9ab21b239fdb768d67a83971ab26e9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:45:18.780567  614803 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.key
	I0226 11:45:18.780590  614803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt with IP's: []
	I0226 11:45:19.702050  614803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt ...
	I0226 11:45:19.702082  614803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: {Name:mke7b51401c278f2e53e7bcee0f0e557ab72dab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:45:19.702801  614803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.key ...
	I0226 11:45:19.702817  614803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.key: {Name:mk632d2240f74cbd7a4f74ab4fddcc8e898ee62f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:45:19.702921  614803 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/apiserver.key.dd3b5fb2
	I0226 11:45:19.702940  614803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0226 11:45:20.034433  614803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/apiserver.crt.dd3b5fb2 ...
	I0226 11:45:20.034467  614803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/apiserver.crt.dd3b5fb2: {Name:mkd4c25be6d4d2f4936820a8545f55aba1d62f41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:45:20.034666  614803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/apiserver.key.dd3b5fb2 ...
	I0226 11:45:20.034683  614803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/apiserver.key.dd3b5fb2: {Name:mkb0a93ee9c57191b60af825870f655c6a8b1aa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:45:20.034768  614803 certs.go:337] copying /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/apiserver.crt
	I0226 11:45:20.034875  614803 certs.go:341] copying /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/apiserver.key
	I0226 11:45:20.034936  614803 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/proxy-client.key
	I0226 11:45:20.034959  614803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/proxy-client.crt with IP's: []
	I0226 11:45:20.760777  614803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/proxy-client.crt ...
	I0226 11:45:20.760808  614803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/proxy-client.crt: {Name:mke6ccd22944a92265f7c7734eee247d6023ad8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:45:20.761520  614803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/proxy-client.key ...
	I0226 11:45:20.761537  614803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/proxy-client.key: {Name:mka69badd0d2a704def94a42b809ba4c4a195525 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:45:20.761741  614803 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca-key.pem (1675 bytes)
	I0226 11:45:20.761785  614803 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem (1082 bytes)
	I0226 11:45:20.761818  614803 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem (1123 bytes)
	I0226 11:45:20.761848  614803 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem (1679 bytes)
	I0226 11:45:20.762450  614803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 11:45:20.786596  614803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0226 11:45:20.811002  614803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 11:45:20.833540  614803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0226 11:45:20.856829  614803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 11:45:20.880887  614803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0226 11:45:20.904080  614803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 11:45:20.927751  614803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0226 11:45:20.950998  614803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 11:45:20.974190  614803 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 11:45:20.991580  614803 ssh_runner.go:195] Run: openssl version
	I0226 11:45:20.996796  614803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 11:45:21.007083  614803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:45:21.011188  614803 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 11:45 /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:45:21.011310  614803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:45:21.018855  614803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 11:45:21.028873  614803 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 11:45:21.032304  614803 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0226 11:45:21.032375  614803 kubeadm.go:404] StartCluster: {Name:addons-006797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-006797 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:45:21.032469  614803 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0226 11:45:21.032543  614803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0226 11:45:21.075066  614803 cri.go:89] found id: ""
	I0226 11:45:21.075161  614803 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 11:45:21.084063  614803 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 11:45:21.092834  614803 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 11:45:21.092928  614803 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 11:45:21.101977  614803 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 11:45:21.102030  614803 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 11:45:21.194883  614803 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0226 11:45:21.268316  614803 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 11:45:38.104242  614803 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0226 11:45:38.104302  614803 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 11:45:38.104387  614803 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0226 11:45:38.104440  614803 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0226 11:45:38.104473  614803 kubeadm.go:322] OS: Linux
	I0226 11:45:38.104517  614803 kubeadm.go:322] CGROUPS_CPU: enabled
	I0226 11:45:38.104563  614803 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0226 11:45:38.104612  614803 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0226 11:45:38.104658  614803 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0226 11:45:38.104721  614803 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0226 11:45:38.104772  614803 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0226 11:45:38.104816  614803 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0226 11:45:38.104868  614803 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0226 11:45:38.104917  614803 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0226 11:45:38.104986  614803 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 11:45:38.105078  614803 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 11:45:38.105173  614803 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 11:45:38.105234  614803 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 11:45:38.107073  614803 out.go:204]   - Generating certificates and keys ...
	I0226 11:45:38.107171  614803 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 11:45:38.107235  614803 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 11:45:38.107299  614803 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0226 11:45:38.107353  614803 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0226 11:45:38.107410  614803 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0226 11:45:38.107457  614803 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0226 11:45:38.107511  614803 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0226 11:45:38.107631  614803 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-006797 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0226 11:45:38.107682  614803 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0226 11:45:38.107791  614803 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-006797 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0226 11:45:38.107853  614803 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0226 11:45:38.107913  614803 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0226 11:45:38.107955  614803 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0226 11:45:38.108008  614803 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 11:45:38.108056  614803 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 11:45:38.108107  614803 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 11:45:38.108181  614803 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 11:45:38.108234  614803 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 11:45:38.108311  614803 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 11:45:38.108373  614803 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 11:45:38.110438  614803 out.go:204]   - Booting up control plane ...
	I0226 11:45:38.110618  614803 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 11:45:38.110715  614803 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 11:45:38.110839  614803 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 11:45:38.110995  614803 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 11:45:38.111097  614803 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 11:45:38.111142  614803 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0226 11:45:38.111305  614803 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 11:45:38.111388  614803 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.004006 seconds
	I0226 11:45:38.111500  614803 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0226 11:45:38.111639  614803 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0226 11:45:38.111702  614803 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0226 11:45:38.111892  614803 kubeadm.go:322] [mark-control-plane] Marking the node addons-006797 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0226 11:45:38.111953  614803 kubeadm.go:322] [bootstrap-token] Using token: 8u5o1u.os64pgduvv6kzro6
	I0226 11:45:38.113755  614803 out.go:204]   - Configuring RBAC rules ...
	I0226 11:45:38.113882  614803 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0226 11:45:38.113974  614803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0226 11:45:38.114122  614803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0226 11:45:38.114258  614803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0226 11:45:38.114381  614803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0226 11:45:38.114491  614803 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0226 11:45:38.114613  614803 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0226 11:45:38.114661  614803 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0226 11:45:38.114712  614803 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0226 11:45:38.114720  614803 kubeadm.go:322] 
	I0226 11:45:38.114782  614803 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0226 11:45:38.114790  614803 kubeadm.go:322] 
	I0226 11:45:38.114869  614803 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0226 11:45:38.114877  614803 kubeadm.go:322] 
	I0226 11:45:38.114904  614803 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0226 11:45:38.114968  614803 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0226 11:45:38.115023  614803 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0226 11:45:38.115031  614803 kubeadm.go:322] 
	I0226 11:45:38.115086  614803 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0226 11:45:38.115094  614803 kubeadm.go:322] 
	I0226 11:45:38.115143  614803 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0226 11:45:38.115151  614803 kubeadm.go:322] 
	I0226 11:45:38.115204  614803 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0226 11:45:38.115288  614803 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0226 11:45:38.115362  614803 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0226 11:45:38.115370  614803 kubeadm.go:322] 
	I0226 11:45:38.115456  614803 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0226 11:45:38.115538  614803 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0226 11:45:38.115546  614803 kubeadm.go:322] 
	I0226 11:45:38.115639  614803 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8u5o1u.os64pgduvv6kzro6 \
	I0226 11:45:38.115751  614803 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4951039124412052416f64387a7476aba3429f2071dfaa9a882b475b36ccdccb \
	I0226 11:45:38.115775  614803 kubeadm.go:322] 	--control-plane 
	I0226 11:45:38.115783  614803 kubeadm.go:322] 
	I0226 11:45:38.115871  614803 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0226 11:45:38.115879  614803 kubeadm.go:322] 
	I0226 11:45:38.115963  614803 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8u5o1u.os64pgduvv6kzro6 \
	I0226 11:45:38.116084  614803 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4951039124412052416f64387a7476aba3429f2071dfaa9a882b475b36ccdccb 
	I0226 11:45:38.116097  614803 cni.go:84] Creating CNI manager for ""
	I0226 11:45:38.116105  614803 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 11:45:38.118118  614803 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0226 11:45:38.119775  614803 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0226 11:45:38.129240  614803 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0226 11:45:38.129264  614803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0226 11:45:38.164078  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0226 11:45:39.050704  614803 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0226 11:45:39.050878  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:39.050972  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4011915ad0e9b27ff42994854397cc2ed93516c6 minikube.k8s.io/name=addons-006797 minikube.k8s.io/updated_at=2024_02_26T11_45_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:39.241919  614803 ops.go:34] apiserver oom_adj: -16
	I0226 11:45:39.242015  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:39.743031  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:40.242161  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:40.742504  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:41.242462  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:41.742665  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:42.242224  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:42.743126  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:43.242936  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:43.742136  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:44.243050  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:44.742572  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:45.242357  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:45.742695  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:46.242739  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:46.742508  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:47.243024  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:47.742252  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:48.242719  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:48.742260  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:49.242964  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:49.743100  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:50.242743  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:50.742171  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:51.242190  614803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:45:51.349248  614803 kubeadm.go:1088] duration metric: took 12.298438007s to wait for elevateKubeSystemPrivileges.
	I0226 11:45:51.349274  614803 kubeadm.go:406] StartCluster complete in 30.316926111s
	I0226 11:45:51.349292  614803 settings.go:142] acquiring lock: {Name:mk1588246e1eeb31f86f63cf3c470d51f6fe64da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:45:51.349400  614803 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 11:45:51.349811  614803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/kubeconfig: {Name:mk0efe1f972316757632066327a27c71356b5734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:45:51.351359  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0226 11:45:51.351498  614803 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0226 11:45:51.351604  614803 addons.go:69] Setting yakd=true in profile "addons-006797"
	I0226 11:45:51.351638  614803 addons.go:234] Setting addon yakd=true in "addons-006797"
	I0226 11:45:51.351682  614803 host.go:66] Checking if "addons-006797" exists ...
	I0226 11:45:51.351874  614803 config.go:182] Loaded profile config "addons-006797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 11:45:51.351921  614803 addons.go:69] Setting inspektor-gadget=true in profile "addons-006797"
	I0226 11:45:51.351932  614803 addons.go:234] Setting addon inspektor-gadget=true in "addons-006797"
	I0226 11:45:51.351965  614803 host.go:66] Checking if "addons-006797" exists ...
	I0226 11:45:51.352403  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.352697  614803 addons.go:69] Setting metrics-server=true in profile "addons-006797"
	I0226 11:45:51.352734  614803 addons.go:234] Setting addon metrics-server=true in "addons-006797"
	I0226 11:45:51.352784  614803 host.go:66] Checking if "addons-006797" exists ...
	I0226 11:45:51.352882  614803 addons.go:69] Setting cloud-spanner=true in profile "addons-006797"
	I0226 11:45:51.352895  614803 addons.go:234] Setting addon cloud-spanner=true in "addons-006797"
	I0226 11:45:51.352923  614803 host.go:66] Checking if "addons-006797" exists ...
	I0226 11:45:51.353441  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.354039  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.355724  614803 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-006797"
	I0226 11:45:51.355787  614803 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-006797"
	I0226 11:45:51.355827  614803 host.go:66] Checking if "addons-006797" exists ...
	I0226 11:45:51.356254  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.363491  614803 addons.go:69] Setting default-storageclass=true in profile "addons-006797"
	I0226 11:45:51.363527  614803 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-006797"
	I0226 11:45:51.363898  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.364335  614803 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-006797"
	I0226 11:45:51.364359  614803 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-006797"
	I0226 11:45:51.364409  614803 host.go:66] Checking if "addons-006797" exists ...
	I0226 11:45:51.365068  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.376868  614803 addons.go:69] Setting gcp-auth=true in profile "addons-006797"
	I0226 11:45:51.376905  614803 mustload.go:65] Loading cluster: addons-006797
	I0226 11:45:51.377113  614803 config.go:182] Loaded profile config "addons-006797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 11:45:51.377377  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.382955  614803 addons.go:69] Setting registry=true in profile "addons-006797"
	I0226 11:45:51.382991  614803 addons.go:234] Setting addon registry=true in "addons-006797"
	I0226 11:45:51.383045  614803 host.go:66] Checking if "addons-006797" exists ...
	I0226 11:45:51.383502  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.396828  614803 addons.go:69] Setting ingress=true in profile "addons-006797"
	I0226 11:45:51.396867  614803 addons.go:234] Setting addon ingress=true in "addons-006797"
	I0226 11:45:51.396935  614803 host.go:66] Checking if "addons-006797" exists ...
	I0226 11:45:51.397493  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.424215  614803 addons.go:69] Setting storage-provisioner=true in profile "addons-006797"
	I0226 11:45:51.424264  614803 addons.go:234] Setting addon storage-provisioner=true in "addons-006797"
	I0226 11:45:51.424351  614803 host.go:66] Checking if "addons-006797" exists ...
	I0226 11:45:51.433294  614803 addons.go:69] Setting ingress-dns=true in profile "addons-006797"
	I0226 11:45:51.433378  614803 addons.go:234] Setting addon ingress-dns=true in "addons-006797"
	I0226 11:45:51.433492  614803 host.go:66] Checking if "addons-006797" exists ...
	I0226 11:45:51.435557  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.446783  614803 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-006797"
	I0226 11:45:51.446870  614803 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-006797"
	I0226 11:45:51.447249  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.476951  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.492782  614803 addons.go:69] Setting volumesnapshots=true in profile "addons-006797"
	I0226 11:45:51.492858  614803 addons.go:234] Setting addon volumesnapshots=true in "addons-006797"
	I0226 11:45:51.492946  614803 host.go:66] Checking if "addons-006797" exists ...
	I0226 11:45:51.493558  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.515136  614803 out.go:177]   - Using image docker.io/registry:2.8.3
	I0226 11:45:51.520354  614803 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0226 11:45:51.522344  614803 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0226 11:45:51.522368  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0226 11:45:51.522440  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:51.586171  614803 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0226 11:45:51.598896  614803 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0226 11:45:51.598936  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0226 11:45:51.599020  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:51.612524  614803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0226 11:45:51.614847  614803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0226 11:45:51.585916  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.615999  614803 host.go:66] Checking if "addons-006797" exists ...
	I0226 11:45:51.617085  614803 addons.go:234] Setting addon default-storageclass=true in "addons-006797"
	I0226 11:45:51.622452  614803 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0226 11:45:51.626230  614803 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0226 11:45:51.626239  614803 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0226 11:45:51.642110  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:51.642397  614803 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0226 11:45:51.642403  614803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.6
	I0226 11:45:51.642587  614803 host.go:66] Checking if "addons-006797" exists ...
	I0226 11:45:51.646396  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.664162  614803 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.4
	I0226 11:45:51.667981  614803 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0226 11:45:51.668002  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0226 11:45:51.668071  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:51.669168  614803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0226 11:45:51.671941  614803 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0226 11:45:51.673805  614803 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0226 11:45:51.675767  614803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0226 11:45:51.677496  614803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0226 11:45:51.679244  614803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0226 11:45:51.680992  614803 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0226 11:45:51.681013  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0226 11:45:51.681085  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:51.690841  614803 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0226 11:45:51.690867  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0226 11:45:51.690937  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:51.723319  614803 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0226 11:45:51.731036  614803 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0226 11:45:51.731343  614803 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0226 11:45:51.731369  614803 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0226 11:45:51.732617  614803 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-006797"
	I0226 11:45:51.732636  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0226 11:45:51.735010  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0226 11:45:51.735080  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0226 11:45:51.735095  614803 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0226 11:45:51.737243  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0226 11:45:51.737373  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:51.737652  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:51.747518  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:51.747740  614803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0226 11:45:51.750480  614803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0226 11:45:51.758668  614803 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0226 11:45:51.758690  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0226 11:45:51.758773  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:51.747976  614803 host.go:66] Checking if "addons-006797" exists ...
	I0226 11:45:51.762843  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:51.748127  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:51.779945  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:51.848846  614803 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:45:51.858867  614803 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 11:45:51.859098  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0226 11:45:51.859262  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:51.879203  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:51.882224  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0226 11:45:51.900320  614803 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0226 11:45:51.900344  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0226 11:45:51.900420  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:51.927607  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:51.946961  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:51.951137  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:51.962430  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:51.994766  614803 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0226 11:45:51.996928  614803 out.go:177]   - Using image docker.io/busybox:stable
	I0226 11:45:51.998950  614803 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0226 11:45:51.998977  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0226 11:45:51.999064  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:52.017327  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:52.020848  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:52.033847  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:52.047928  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:52.073489  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:52.103409  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:52.172570  614803 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0226 11:45:52.172602  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0226 11:45:52.197069  614803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0226 11:45:52.297642  614803 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-006797" context rescaled to 1 replicas
	I0226 11:45:52.297685  614803 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0226 11:45:52.299952  614803 out.go:177] * Verifying Kubernetes components...
	I0226 11:45:52.302737  614803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 11:45:52.386369  614803 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0226 11:45:52.386392  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0226 11:45:52.397280  614803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0226 11:45:52.411407  614803 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0226 11:45:52.411441  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0226 11:45:52.459458  614803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0226 11:45:52.511783  614803 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0226 11:45:52.511809  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0226 11:45:52.519727  614803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0226 11:45:52.526465  614803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 11:45:52.529765  614803 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0226 11:45:52.529789  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0226 11:45:52.551314  614803 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0226 11:45:52.551343  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0226 11:45:52.570279  614803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0226 11:45:52.630657  614803 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0226 11:45:52.630684  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0226 11:45:52.636447  614803 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0226 11:45:52.636483  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0226 11:45:52.677460  614803 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0226 11:45:52.677489  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0226 11:45:52.680107  614803 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0226 11:45:52.680146  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0226 11:45:52.682280  614803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0226 11:45:52.685164  614803 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0226 11:45:52.685189  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0226 11:45:52.687810  614803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0226 11:45:52.724888  614803 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0226 11:45:52.724915  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0226 11:45:52.796481  614803 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0226 11:45:52.796525  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0226 11:45:52.831623  614803 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0226 11:45:52.831659  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0226 11:45:52.874211  614803 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0226 11:45:52.874236  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0226 11:45:52.874846  614803 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0226 11:45:52.874864  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0226 11:45:52.878814  614803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0226 11:45:52.976180  614803 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0226 11:45:52.976208  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0226 11:45:53.046732  614803 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0226 11:45:53.046759  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0226 11:45:53.077678  614803 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0226 11:45:53.077710  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0226 11:45:53.078969  614803 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0226 11:45:53.078992  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0226 11:45:53.146459  614803 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0226 11:45:53.146485  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0226 11:45:53.219632  614803 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0226 11:45:53.219662  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0226 11:45:53.278058  614803 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0226 11:45:53.278089  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0226 11:45:53.296869  614803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0226 11:45:53.329162  614803 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0226 11:45:53.329190  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0226 11:45:53.400274  614803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0226 11:45:53.412527  614803 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0226 11:45:53.412557  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0226 11:45:53.469276  614803 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0226 11:45:53.469302  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0226 11:45:53.489755  614803 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0226 11:45:53.489781  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0226 11:45:53.557614  614803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0226 11:45:53.560528  614803 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0226 11:45:53.560560  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0226 11:45:53.621427  614803 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0226 11:45:53.621453  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0226 11:45:53.775102  614803 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0226 11:45:53.775136  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0226 11:45:53.929362  614803 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0226 11:45:53.929390  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0226 11:45:54.104423  614803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0226 11:45:54.696416  614803 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.814131997s)
	I0226 11:45:54.696550  614803 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0226 11:45:56.239869  614803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.042758017s)
	I0226 11:45:56.239984  614803 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.937222187s)
	I0226 11:45:56.240286  614803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.842980846s)
	I0226 11:45:56.241115  614803 node_ready.go:35] waiting up to 6m0s for node "addons-006797" to be "Ready" ...
	I0226 11:45:56.777799  614803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.318301199s)
	I0226 11:45:56.777869  614803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.258116748s)
	I0226 11:45:57.739075  614803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.212571889s)
	I0226 11:45:58.247438  614803 node_ready.go:58] node "addons-006797" has status "Ready":"False"
	I0226 11:45:58.496994  614803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.926675558s)
	I0226 11:45:58.497039  614803 addons.go:470] Verifying addon ingress=true in "addons-006797"
	I0226 11:45:58.499134  614803 out.go:177] * Verifying ingress addon...
	I0226 11:45:58.497191  614803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.814878747s)
	I0226 11:45:58.497234  614803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.809402752s)
	I0226 11:45:58.497287  614803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.61845309s)
	I0226 11:45:58.497359  614803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.200462819s)
	I0226 11:45:58.497473  614803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.939829002s)
	I0226 11:45:58.497575  614803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.097109298s)
	I0226 11:45:58.499292  614803 addons.go:470] Verifying addon registry=true in "addons-006797"
	I0226 11:45:58.502838  614803 out.go:177] * Verifying registry addon...
	I0226 11:45:58.499487  614803 addons.go:470] Verifying addon metrics-server=true in "addons-006797"
	W0226 11:45:58.499659  614803 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0226 11:45:58.505524  614803 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0226 11:45:58.505553  614803 retry.go:31] will retry after 164.862799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0226 11:45:58.506613  614803 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-006797 service yakd-dashboard -n yakd-dashboard
	
	I0226 11:45:58.507766  614803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0226 11:45:58.521235  614803 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0226 11:45:58.521315  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:45:58.522642  614803 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0226 11:45:58.522703  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:45:58.673194  614803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0226 11:45:58.792906  614803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.688416413s)
	I0226 11:45:58.792988  614803 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-006797"
	I0226 11:45:58.796146  614803 out.go:177] * Verifying csi-hostpath-driver addon...
	I0226 11:45:58.799824  614803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0226 11:45:58.806461  614803 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0226 11:45:58.806533  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:45:58.842976  614803 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0226 11:45:58.843131  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:58.867887  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:58.987389  614803 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0226 11:45:59.011416  614803 addons.go:234] Setting addon gcp-auth=true in "addons-006797"
	I0226 11:45:59.011471  614803 host.go:66] Checking if "addons-006797" exists ...
	I0226 11:45:59.011939  614803 cli_runner.go:164] Run: docker container inspect addons-006797 --format={{.State.Status}}
	I0226 11:45:59.032056  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:45:59.032415  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:45:59.038662  614803 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0226 11:45:59.038722  614803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006797
	I0226 11:45:59.064771  614803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36801 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/addons-006797/id_rsa Username:docker}
	I0226 11:45:59.316232  614803 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0226 11:45:59.316302  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:45:59.527777  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:45:59.548252  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:45:59.806953  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:00.024034  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:00.026498  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:00.277105  614803 node_ready.go:58] node "addons-006797" has status "Ready":"False"
	I0226 11:46:00.318258  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:00.533887  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:00.535220  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:00.719398  614803 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.68069656s)
	I0226 11:46:00.719406  614803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.046118121s)
	I0226 11:46:00.721858  614803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0226 11:46:00.724500  614803 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0226 11:46:00.726811  614803 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0226 11:46:00.726877  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0226 11:46:00.787270  614803 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0226 11:46:00.787293  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0226 11:46:00.804959  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:00.839782  614803 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0226 11:46:00.839849  614803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0226 11:46:00.891079  614803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0226 11:46:01.037803  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:01.039806  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:01.305066  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:01.511935  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:01.515004  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:01.805411  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:02.037170  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:02.039048  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:02.227310  614803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.336148188s)
	I0226 11:46:02.231274  614803 addons.go:470] Verifying addon gcp-auth=true in "addons-006797"
	I0226 11:46:02.234943  614803 out.go:177] * Verifying gcp-auth addon...
	I0226 11:46:02.241846  614803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0226 11:46:02.277518  614803 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0226 11:46:02.277584  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:02.316094  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:02.513397  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:02.515464  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:02.747235  614803 node_ready.go:58] node "addons-006797" has status "Ready":"False"
	I0226 11:46:02.747760  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:02.805480  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:03.024741  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:03.027064  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:03.246934  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:03.305207  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:03.513949  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:03.515823  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:03.748799  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:03.804579  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:04.031138  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:04.032207  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:04.251026  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:04.306544  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:04.514927  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:04.517466  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:04.748726  614803 node_ready.go:58] node "addons-006797" has status "Ready":"False"
	I0226 11:46:04.753239  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:04.804644  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:05.026706  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:05.026981  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:05.246243  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:05.304137  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:05.512862  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:05.513525  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:05.747988  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:05.805666  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:06.019368  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:06.022176  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:06.247326  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:06.304569  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:06.513754  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:06.516068  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:06.745587  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:06.805689  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:07.013787  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:07.017467  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:07.245184  614803 node_ready.go:58] node "addons-006797" has status "Ready":"False"
	I0226 11:46:07.246623  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:07.304458  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:07.513354  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:07.515006  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:07.745562  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:07.804876  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:08.016412  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:08.017266  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:08.246221  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:08.304364  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:08.512330  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:08.513919  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:08.745140  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:08.804649  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:09.013818  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:09.016927  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:09.245484  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:09.304394  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:09.512059  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:09.514938  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:09.744658  614803 node_ready.go:58] node "addons-006797" has status "Ready":"False"
	I0226 11:46:09.745609  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:09.804810  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:10.018422  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:10.023797  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:10.246088  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:10.305055  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:10.512070  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:10.513765  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:10.745536  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:10.805031  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:11.013148  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:11.015711  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:11.245658  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:11.305037  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:11.512895  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:11.513924  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:11.745824  614803 node_ready.go:58] node "addons-006797" has status "Ready":"False"
	I0226 11:46:11.746231  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:11.804279  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:12.025959  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:12.027337  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:12.245844  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:12.304193  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:12.511635  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:12.514932  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:12.746388  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:12.804360  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:13.015021  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:13.019734  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:13.245001  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:13.305136  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:13.512578  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:13.514572  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:13.746663  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:13.804772  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:14.018022  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:14.019446  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:14.245207  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:14.245606  614803 node_ready.go:58] node "addons-006797" has status "Ready":"False"
	I0226 11:46:14.304800  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:14.514125  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:14.515156  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:14.746643  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:14.804850  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:15.021899  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:15.026459  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:15.246117  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:15.304903  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:15.512027  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:15.514907  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:15.745651  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:15.804962  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:16.017618  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:16.026745  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:16.246105  614803 node_ready.go:58] node "addons-006797" has status "Ready":"False"
	I0226 11:46:16.246630  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:16.304996  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:16.513604  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:16.514471  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:16.745753  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:16.805204  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:17.013556  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:17.015135  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:17.246541  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:17.305008  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:17.512630  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:17.514370  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:17.746730  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:17.804909  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:18.014245  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:18.017305  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:18.246761  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:18.304919  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:18.511950  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:18.514087  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:18.745626  614803 node_ready.go:58] node "addons-006797" has status "Ready":"False"
	I0226 11:46:18.746645  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:18.806343  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:19.013694  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:19.017747  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:19.246678  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:19.305081  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:19.513908  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:19.515809  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:19.745907  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:19.804817  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:20.018359  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:20.019824  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:20.246269  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:20.304641  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:20.513258  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:20.514191  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:20.746162  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:20.804292  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:21.021595  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:21.023065  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:21.244603  614803 node_ready.go:58] node "addons-006797" has status "Ready":"False"
	I0226 11:46:21.245687  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:21.304577  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:21.518381  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:21.519381  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:21.745445  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:21.805653  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:22.014318  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:22.015153  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:22.245803  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:22.304920  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:22.513482  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:22.514313  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:22.746064  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:22.804373  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:23.013688  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:23.016652  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:23.246352  614803 node_ready.go:58] node "addons-006797" has status "Ready":"False"
	I0226 11:46:23.247019  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:23.304955  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:23.512785  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:23.512941  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:23.746149  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:23.804431  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:24.014526  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:24.017841  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:24.246027  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:24.312632  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:24.511797  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:24.515527  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:24.747296  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:24.804773  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:25.014037  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:25.019208  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:25.247314  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:25.305485  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:25.512315  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:25.514712  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:25.747013  614803 node_ready.go:58] node "addons-006797" has status "Ready":"False"
	I0226 11:46:25.747354  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:25.805203  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:26.014338  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:26.017049  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:26.246704  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:26.304660  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:26.512992  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:26.514554  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:26.746591  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:26.804820  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:27.014604  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:27.017458  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:27.245567  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:27.304890  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:27.513999  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:27.517928  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:27.782428  614803 node_ready.go:49] node "addons-006797" has status "Ready":"True"
	I0226 11:46:27.782452  614803 node_ready.go:38] duration metric: took 31.541280102s waiting for node "addons-006797" to be "Ready" ...
	I0226 11:46:27.782463  614803 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 11:46:27.787305  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:27.802608  614803 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2n2bn" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:27.821447  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:28.123939  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:28.139884  614803 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0226 11:46:28.139910  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:28.249899  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:28.314163  614803 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0226 11:46:28.314191  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:28.516535  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:28.523645  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:28.756050  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:28.820487  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:29.016063  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:29.016195  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:29.246315  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:29.308788  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:29.521554  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:29.532715  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:29.748175  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:29.818011  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:29.822787  614803 pod_ready.go:102] pod "coredns-5dd5756b68-2n2bn" in "kube-system" namespace has status "Ready":"False"
	I0226 11:46:30.031586  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:30.049268  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:30.246611  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:30.324565  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:30.331379  614803 pod_ready.go:92] pod "coredns-5dd5756b68-2n2bn" in "kube-system" namespace has status "Ready":"True"
	I0226 11:46:30.331416  614803 pod_ready.go:81] duration metric: took 2.528760862s waiting for pod "coredns-5dd5756b68-2n2bn" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:30.331436  614803 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-006797" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:30.339683  614803 pod_ready.go:92] pod "etcd-addons-006797" in "kube-system" namespace has status "Ready":"True"
	I0226 11:46:30.339711  614803 pod_ready.go:81] duration metric: took 8.26546ms waiting for pod "etcd-addons-006797" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:30.339737  614803 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-006797" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:30.347357  614803 pod_ready.go:92] pod "kube-apiserver-addons-006797" in "kube-system" namespace has status "Ready":"True"
	I0226 11:46:30.347393  614803 pod_ready.go:81] duration metric: took 7.644614ms waiting for pod "kube-apiserver-addons-006797" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:30.347406  614803 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-006797" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:30.354051  614803 pod_ready.go:92] pod "kube-controller-manager-addons-006797" in "kube-system" namespace has status "Ready":"True"
	I0226 11:46:30.354078  614803 pod_ready.go:81] duration metric: took 6.663178ms waiting for pod "kube-controller-manager-addons-006797" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:30.354093  614803 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5xmt7" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:30.360656  614803 pod_ready.go:92] pod "kube-proxy-5xmt7" in "kube-system" namespace has status "Ready":"True"
	I0226 11:46:30.360704  614803 pod_ready.go:81] duration metric: took 6.593502ms waiting for pod "kube-proxy-5xmt7" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:30.360716  614803 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-006797" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:30.518710  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:30.519709  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:30.708090  614803 pod_ready.go:92] pod "kube-scheduler-addons-006797" in "kube-system" namespace has status "Ready":"True"
	I0226 11:46:30.708125  614803 pod_ready.go:81] duration metric: took 347.400222ms waiting for pod "kube-scheduler-addons-006797" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:30.708141  614803 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-4r7kf" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:30.747393  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:30.806433  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:31.016159  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:31.054463  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:31.247127  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:31.323428  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:31.513262  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:31.523774  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:31.746161  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:31.806927  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:32.021005  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:32.026886  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:32.246203  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:32.309241  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:32.513883  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:32.520530  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:32.719187  614803 pod_ready.go:102] pod "metrics-server-69cf46c98-4r7kf" in "kube-system" namespace has status "Ready":"False"
	I0226 11:46:32.747412  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:32.807936  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:33.033302  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:33.035540  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:33.248645  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:33.307298  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:33.526411  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:33.527671  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:33.746794  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:33.806829  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:34.023776  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:34.033102  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:34.253680  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:34.308035  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:34.513353  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:34.514588  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:34.746438  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:34.805547  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:35.017249  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:35.020424  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:35.214906  614803 pod_ready.go:102] pod "metrics-server-69cf46c98-4r7kf" in "kube-system" namespace has status "Ready":"False"
	I0226 11:46:35.245473  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:35.308188  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:35.517038  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:35.520636  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:35.747304  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:35.806716  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:36.031287  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:36.033019  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:36.246605  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:36.308616  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:36.518116  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:36.518746  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:36.747221  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:36.808874  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:37.012864  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:37.023013  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:37.215396  614803 pod_ready.go:102] pod "metrics-server-69cf46c98-4r7kf" in "kube-system" namespace has status "Ready":"False"
	I0226 11:46:37.249277  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:37.311207  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:37.513767  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:37.517805  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:37.746264  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:37.807404  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:38.013687  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:38.018375  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:38.246751  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:38.318024  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:38.515450  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:38.515509  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:38.745783  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:38.805885  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:39.016438  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:39.018130  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:39.225038  614803 pod_ready.go:102] pod "metrics-server-69cf46c98-4r7kf" in "kube-system" namespace has status "Ready":"False"
	I0226 11:46:39.248756  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:39.308999  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:39.512383  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:39.520428  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:39.746660  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:39.806490  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:40.045447  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:40.045713  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:40.245524  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:40.306429  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:40.512758  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:40.514845  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:40.747602  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:40.806223  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:41.015495  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:41.016666  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:41.245820  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:41.309837  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:41.514247  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:41.518716  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:41.718352  614803 pod_ready.go:102] pod "metrics-server-69cf46c98-4r7kf" in "kube-system" namespace has status "Ready":"False"
	I0226 11:46:41.747224  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:41.807110  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:42.022356  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:42.031769  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:42.248586  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:42.306318  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:42.519271  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:42.520917  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:42.745952  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:42.806116  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:43.012587  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:43.017639  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:43.246487  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:43.307935  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:43.512445  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:43.515596  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:43.746571  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:43.809588  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:44.026092  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:44.034299  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:44.221012  614803 pod_ready.go:102] pod "metrics-server-69cf46c98-4r7kf" in "kube-system" namespace has status "Ready":"False"
	I0226 11:46:44.247690  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:44.306993  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:44.531852  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:44.543961  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:44.746205  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:44.806488  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:45.023906  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:45.025079  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:45.246922  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:45.310598  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:45.512803  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:45.515909  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:45.747545  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:45.807368  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:46.034197  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:46.040266  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:46.247034  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:46.308544  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:46.514575  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:46.518901  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:46.719788  614803 pod_ready.go:102] pod "metrics-server-69cf46c98-4r7kf" in "kube-system" namespace has status "Ready":"False"
	I0226 11:46:46.746417  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:46.806659  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:47.013092  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:47.020264  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:47.248253  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:47.309238  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:47.512842  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:47.515020  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:47.746595  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:47.806183  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:48.016074  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:48.019395  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:48.245586  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:48.306640  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:48.531626  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:48.534444  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:48.746001  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:48.809821  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:49.026681  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:49.029870  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:49.215666  614803 pod_ready.go:102] pod "metrics-server-69cf46c98-4r7kf" in "kube-system" namespace has status "Ready":"False"
	I0226 11:46:49.255837  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:49.307803  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:49.512604  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:49.517358  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:49.746686  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:49.806973  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:50.059632  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:50.089329  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:50.245983  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:50.305921  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:50.517723  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:50.518866  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:50.746465  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:50.806107  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:51.022735  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:51.028462  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:51.217704  614803 pod_ready.go:102] pod "metrics-server-69cf46c98-4r7kf" in "kube-system" namespace has status "Ready":"False"
	I0226 11:46:51.246097  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:51.306056  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:51.541618  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:51.544770  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:51.715540  614803 pod_ready.go:92] pod "metrics-server-69cf46c98-4r7kf" in "kube-system" namespace has status "Ready":"True"
	I0226 11:46:51.715569  614803 pod_ready.go:81] duration metric: took 21.007420009s waiting for pod "metrics-server-69cf46c98-4r7kf" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:51.715582  614803 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-z98fk" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:51.745691  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:51.806354  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:52.023150  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:52.023895  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:52.246037  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:52.305803  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:52.521830  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:52.522506  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:52.746337  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:52.814234  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:53.026053  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:53.031726  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:53.245829  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:53.305932  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:53.511837  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:53.527054  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:53.722770  614803 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-z98fk" in "kube-system" namespace has status "Ready":"False"
	I0226 11:46:53.746822  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:53.805738  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:54.017334  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:54.027875  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:54.253376  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:54.307310  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:54.527442  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:54.535851  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:54.747978  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:54.807790  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:55.021411  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:55.022606  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:55.248285  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:55.305436  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:55.515871  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:55.519602  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:55.726872  614803 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-z98fk" in "kube-system" namespace has status "Ready":"False"
	I0226 11:46:55.746010  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:55.806437  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:56.015940  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:56.033627  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:56.246578  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:56.307734  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:56.515192  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:56.516241  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:56.747234  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:56.810380  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:57.014740  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:57.017024  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:57.246371  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:57.306496  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:57.516280  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:57.517915  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:57.734180  614803 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-z98fk" in "kube-system" namespace has status "Ready":"True"
	I0226 11:46:57.734208  614803 pod_ready.go:81] duration metric: took 6.018597758s waiting for pod "nvidia-device-plugin-daemonset-z98fk" in "kube-system" namespace to be "Ready" ...
	I0226 11:46:57.734234  614803 pod_ready.go:38] duration metric: took 29.951757279s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 11:46:57.734252  614803 api_server.go:52] waiting for apiserver process to appear ...
	I0226 11:46:57.734284  614803 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0226 11:46:57.734349  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0226 11:46:57.745908  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:57.801472  614803 cri.go:89] found id: "04d4b11ac34d15fcc60a3b6bee6bf3355bcc03aa6d7fcdecdc9ec4aef9323da7"
	I0226 11:46:57.801575  614803 cri.go:89] found id: ""
	I0226 11:46:57.801612  614803 logs.go:276] 1 containers: [04d4b11ac34d15fcc60a3b6bee6bf3355bcc03aa6d7fcdecdc9ec4aef9323da7]
	I0226 11:46:57.801686  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:46:57.806005  614803 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0226 11:46:57.806094  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0226 11:46:57.808043  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:57.877709  614803 cri.go:89] found id: "e9505d37afa43999407f50624df81d16bb3d9c65f9d4e68fdb2d850d71cbdd8e"
	I0226 11:46:57.877731  614803 cri.go:89] found id: ""
	I0226 11:46:57.877740  614803 logs.go:276] 1 containers: [e9505d37afa43999407f50624df81d16bb3d9c65f9d4e68fdb2d850d71cbdd8e]
	I0226 11:46:57.877806  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:46:57.882510  614803 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0226 11:46:57.882611  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0226 11:46:57.933891  614803 cri.go:89] found id: "be0efb2580c536cde20c9dde56a98927c52fde83af8af15f34bbc14153b97e41"
	I0226 11:46:57.933968  614803 cri.go:89] found id: ""
	I0226 11:46:57.933992  614803 logs.go:276] 1 containers: [be0efb2580c536cde20c9dde56a98927c52fde83af8af15f34bbc14153b97e41]
	I0226 11:46:57.934089  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:46:57.939214  614803 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0226 11:46:57.939338  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0226 11:46:58.025465  614803 cri.go:89] found id: "48715769dded4613361b91993bb07e4ea5ded8a791ca6aeb3c58840e3064365e"
	I0226 11:46:58.025539  614803 cri.go:89] found id: ""
	I0226 11:46:58.025562  614803 logs.go:276] 1 containers: [48715769dded4613361b91993bb07e4ea5ded8a791ca6aeb3c58840e3064365e]
	I0226 11:46:58.025648  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:46:58.036096  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:58.037584  614803 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0226 11:46:58.037656  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0226 11:46:58.042932  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:58.121743  614803 cri.go:89] found id: "66663f67cb76bb86d3bebec7cfb039fc4d214c3b7eb5af78929fab27a7421d68"
	I0226 11:46:58.121766  614803 cri.go:89] found id: ""
	I0226 11:46:58.121774  614803 logs.go:276] 1 containers: [66663f67cb76bb86d3bebec7cfb039fc4d214c3b7eb5af78929fab27a7421d68]
	I0226 11:46:58.121829  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:46:58.125615  614803 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0226 11:46:58.125688  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0226 11:46:58.164873  614803 cri.go:89] found id: "0bd9d07570c891f4b4769ab8e5adf03a13f9f164ef8538d6424930f60a41d810"
	I0226 11:46:58.164900  614803 cri.go:89] found id: ""
	I0226 11:46:58.164909  614803 logs.go:276] 1 containers: [0bd9d07570c891f4b4769ab8e5adf03a13f9f164ef8538d6424930f60a41d810]
	I0226 11:46:58.164989  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:46:58.169108  614803 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0226 11:46:58.169209  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0226 11:46:58.220247  614803 cri.go:89] found id: "11c57a321f69b62844de5ad0e3677c6e80e99a92eec6d85538f7f35e5895e787"
	I0226 11:46:58.220272  614803 cri.go:89] found id: ""
	I0226 11:46:58.220280  614803 logs.go:276] 1 containers: [11c57a321f69b62844de5ad0e3677c6e80e99a92eec6d85538f7f35e5895e787]
	I0226 11:46:58.220343  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:46:58.223866  614803 logs.go:123] Gathering logs for kubelet ...
	I0226 11:46:58.223893  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 11:46:58.253452  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0226 11:46:58.285930  614803 logs.go:138] Found kubelet problem: Feb 26 11:46:27 addons-006797 kubelet[1354]: W0226 11:46:27.756941    1354 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006797" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006797' and this object
	W0226 11:46:58.286181  614803 logs.go:138] Found kubelet problem: Feb 26 11:46:27 addons-006797 kubelet[1354]: E0226 11:46:27.757003    1354 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006797" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006797' and this object
	I0226 11:46:58.310916  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:58.318202  614803 logs.go:123] Gathering logs for dmesg ...
	I0226 11:46:58.318239  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:46:58.337932  614803 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:46:58.337973  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0226 11:46:58.512736  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:58.517696  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:58.542286  614803 logs.go:123] Gathering logs for kube-scheduler [48715769dded4613361b91993bb07e4ea5ded8a791ca6aeb3c58840e3064365e] ...
	I0226 11:46:58.542362  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48715769dded4613361b91993bb07e4ea5ded8a791ca6aeb3c58840e3064365e"
	I0226 11:46:58.590675  614803 logs.go:123] Gathering logs for kube-proxy [66663f67cb76bb86d3bebec7cfb039fc4d214c3b7eb5af78929fab27a7421d68] ...
	I0226 11:46:58.590709  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66663f67cb76bb86d3bebec7cfb039fc4d214c3b7eb5af78929fab27a7421d68"
	I0226 11:46:58.637527  614803 logs.go:123] Gathering logs for kube-controller-manager [0bd9d07570c891f4b4769ab8e5adf03a13f9f164ef8538d6424930f60a41d810] ...
	I0226 11:46:58.637556  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0bd9d07570c891f4b4769ab8e5adf03a13f9f164ef8538d6424930f60a41d810"
	I0226 11:46:58.755237  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:58.782630  614803 logs.go:123] Gathering logs for CRI-O ...
	I0226 11:46:58.782672  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0226 11:46:58.806707  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:58.931641  614803 logs.go:123] Gathering logs for kube-apiserver [04d4b11ac34d15fcc60a3b6bee6bf3355bcc03aa6d7fcdecdc9ec4aef9323da7] ...
	I0226 11:46:58.931976  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04d4b11ac34d15fcc60a3b6bee6bf3355bcc03aa6d7fcdecdc9ec4aef9323da7"
	I0226 11:46:59.011805  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:59.023550  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:59.204269  614803 logs.go:123] Gathering logs for etcd [e9505d37afa43999407f50624df81d16bb3d9c65f9d4e68fdb2d850d71cbdd8e] ...
	I0226 11:46:59.204351  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9505d37afa43999407f50624df81d16bb3d9c65f9d4e68fdb2d850d71cbdd8e"
	I0226 11:46:59.245735  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:59.306982  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:59.335536  614803 logs.go:123] Gathering logs for coredns [be0efb2580c536cde20c9dde56a98927c52fde83af8af15f34bbc14153b97e41] ...
	I0226 11:46:59.335612  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0efb2580c536cde20c9dde56a98927c52fde83af8af15f34bbc14153b97e41"
	I0226 11:46:59.517699  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:46:59.526738  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:46:59.533244  614803 logs.go:123] Gathering logs for kindnet [11c57a321f69b62844de5ad0e3677c6e80e99a92eec6d85538f7f35e5895e787] ...
	I0226 11:46:59.533322  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c57a321f69b62844de5ad0e3677c6e80e99a92eec6d85538f7f35e5895e787"
	I0226 11:46:59.642669  614803 logs.go:123] Gathering logs for container status ...
	I0226 11:46:59.642746  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:46:59.747522  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:46:59.817470  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:46:59.833980  614803 out.go:304] Setting ErrFile to fd 2...
	I0226 11:46:59.834673  614803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:46:59.834770  614803 out.go:239] X Problems detected in kubelet:
	W0226 11:46:59.834812  614803 out.go:239]   Feb 26 11:46:27 addons-006797 kubelet[1354]: W0226 11:46:27.756941    1354 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006797" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006797' and this object
	W0226 11:46:59.834982  614803 out.go:239]   Feb 26 11:46:27 addons-006797 kubelet[1354]: E0226 11:46:27.757003    1354 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006797" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006797' and this object
	I0226 11:46:59.835028  614803 out.go:304] Setting ErrFile to fd 2...
	I0226 11:46:59.835050  614803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:47:00.033893  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:00.040516  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:00.249374  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:00.313713  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:00.513927  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:00.520057  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:00.746814  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:00.807937  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:01.045428  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:01.048243  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:01.248231  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:01.308085  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:01.523141  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:01.524062  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:01.747372  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:01.812201  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:02.016709  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:02.023546  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:02.246611  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:02.308505  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:02.512064  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:02.515282  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:02.746658  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:02.808843  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:03.015340  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:03.016348  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:03.246940  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:03.306950  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:03.512111  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:03.518830  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:03.746147  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:03.806462  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:04.016528  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:04.020870  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:04.245939  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:04.306721  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:04.513072  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:04.515496  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:04.746530  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:04.806932  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:05.012843  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:05.016521  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:05.246260  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:05.306015  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:05.515150  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:05.516233  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:05.746964  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:05.807991  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:06.014208  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:06.015832  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:06.245851  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:06.307044  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:06.516819  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:06.520818  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:06.745874  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:06.806640  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:07.050183  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:07.061694  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:07.246270  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:07.306667  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:07.512256  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:07.516856  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:07.746443  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:07.807784  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:08.025689  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:08.050411  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:08.246535  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:08.348194  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:08.514384  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:08.518587  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:08.745951  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:08.809109  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:09.014227  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:09.019273  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:09.246212  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:09.305888  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:09.512574  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:09.515742  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0226 11:47:09.754263  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:09.807386  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:09.836645  614803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:47:09.852390  614803 api_server.go:72] duration metric: took 1m17.554644247s to wait for apiserver process to appear ...
	I0226 11:47:09.852418  614803 api_server.go:88] waiting for apiserver healthz status ...
	I0226 11:47:09.852453  614803 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0226 11:47:09.852513  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0226 11:47:09.894740  614803 cri.go:89] found id: "04d4b11ac34d15fcc60a3b6bee6bf3355bcc03aa6d7fcdecdc9ec4aef9323da7"
	I0226 11:47:09.894763  614803 cri.go:89] found id: ""
	I0226 11:47:09.894771  614803 logs.go:276] 1 containers: [04d4b11ac34d15fcc60a3b6bee6bf3355bcc03aa6d7fcdecdc9ec4aef9323da7]
	I0226 11:47:09.894827  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:47:09.899081  614803 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0226 11:47:09.899160  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0226 11:47:09.974433  614803 cri.go:89] found id: "e9505d37afa43999407f50624df81d16bb3d9c65f9d4e68fdb2d850d71cbdd8e"
	I0226 11:47:09.974455  614803 cri.go:89] found id: ""
	I0226 11:47:09.974463  614803 logs.go:276] 1 containers: [e9505d37afa43999407f50624df81d16bb3d9c65f9d4e68fdb2d850d71cbdd8e]
	I0226 11:47:09.974516  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:47:09.989097  614803 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0226 11:47:09.989166  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0226 11:47:10.033182  614803 kapi.go:107] duration metric: took 1m11.525412855s to wait for kubernetes.io/minikube-addons=registry ...
	I0226 11:47:10.034670  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:10.095812  614803 cri.go:89] found id: "be0efb2580c536cde20c9dde56a98927c52fde83af8af15f34bbc14153b97e41"
	I0226 11:47:10.095894  614803 cri.go:89] found id: ""
	I0226 11:47:10.095920  614803 logs.go:276] 1 containers: [be0efb2580c536cde20c9dde56a98927c52fde83af8af15f34bbc14153b97e41]
	I0226 11:47:10.096015  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:47:10.100259  614803 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0226 11:47:10.100423  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0226 11:47:10.149447  614803 cri.go:89] found id: "48715769dded4613361b91993bb07e4ea5ded8a791ca6aeb3c58840e3064365e"
	I0226 11:47:10.149530  614803 cri.go:89] found id: ""
	I0226 11:47:10.149555  614803 logs.go:276] 1 containers: [48715769dded4613361b91993bb07e4ea5ded8a791ca6aeb3c58840e3064365e]
	I0226 11:47:10.149640  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:47:10.156744  614803 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0226 11:47:10.156868  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0226 11:47:10.224981  614803 cri.go:89] found id: "66663f67cb76bb86d3bebec7cfb039fc4d214c3b7eb5af78929fab27a7421d68"
	I0226 11:47:10.225006  614803 cri.go:89] found id: ""
	I0226 11:47:10.225027  614803 logs.go:276] 1 containers: [66663f67cb76bb86d3bebec7cfb039fc4d214c3b7eb5af78929fab27a7421d68]
	I0226 11:47:10.225086  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:47:10.229471  614803 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0226 11:47:10.229549  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0226 11:47:10.246665  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:10.293674  614803 cri.go:89] found id: "0bd9d07570c891f4b4769ab8e5adf03a13f9f164ef8538d6424930f60a41d810"
	I0226 11:47:10.293699  614803 cri.go:89] found id: ""
	I0226 11:47:10.293708  614803 logs.go:276] 1 containers: [0bd9d07570c891f4b4769ab8e5adf03a13f9f164ef8538d6424930f60a41d810]
	I0226 11:47:10.293771  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:47:10.301373  614803 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0226 11:47:10.301453  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0226 11:47:10.310813  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:10.360225  614803 cri.go:89] found id: "11c57a321f69b62844de5ad0e3677c6e80e99a92eec6d85538f7f35e5895e787"
	I0226 11:47:10.360250  614803 cri.go:89] found id: ""
	I0226 11:47:10.360259  614803 logs.go:276] 1 containers: [11c57a321f69b62844de5ad0e3677c6e80e99a92eec6d85538f7f35e5895e787]
	I0226 11:47:10.360328  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:47:10.365859  614803 logs.go:123] Gathering logs for kube-controller-manager [0bd9d07570c891f4b4769ab8e5adf03a13f9f164ef8538d6424930f60a41d810] ...
	I0226 11:47:10.365884  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0bd9d07570c891f4b4769ab8e5adf03a13f9f164ef8538d6424930f60a41d810"
	I0226 11:47:10.470778  614803 logs.go:123] Gathering logs for kindnet [11c57a321f69b62844de5ad0e3677c6e80e99a92eec6d85538f7f35e5895e787] ...
	I0226 11:47:10.470819  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c57a321f69b62844de5ad0e3677c6e80e99a92eec6d85538f7f35e5895e787"
	I0226 11:47:10.512499  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:10.518155  614803 logs.go:123] Gathering logs for CRI-O ...
	I0226 11:47:10.518184  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0226 11:47:10.621330  614803 logs.go:123] Gathering logs for container status ...
	I0226 11:47:10.621417  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:47:10.673318  614803 logs.go:123] Gathering logs for dmesg ...
	I0226 11:47:10.673351  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:47:10.693787  614803 logs.go:123] Gathering logs for etcd [e9505d37afa43999407f50624df81d16bb3d9c65f9d4e68fdb2d850d71cbdd8e] ...
	I0226 11:47:10.693819  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9505d37afa43999407f50624df81d16bb3d9c65f9d4e68fdb2d850d71cbdd8e"
	I0226 11:47:10.750791  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:10.754123  614803 logs.go:123] Gathering logs for coredns [be0efb2580c536cde20c9dde56a98927c52fde83af8af15f34bbc14153b97e41] ...
	I0226 11:47:10.754172  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0efb2580c536cde20c9dde56a98927c52fde83af8af15f34bbc14153b97e41"
	I0226 11:47:10.799234  614803 logs.go:123] Gathering logs for kube-scheduler [48715769dded4613361b91993bb07e4ea5ded8a791ca6aeb3c58840e3064365e] ...
	I0226 11:47:10.799262  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48715769dded4613361b91993bb07e4ea5ded8a791ca6aeb3c58840e3064365e"
	I0226 11:47:10.807747  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:10.854293  614803 logs.go:123] Gathering logs for kubelet ...
	I0226 11:47:10.854335  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:47:10.918362  614803 logs.go:138] Found kubelet problem: Feb 26 11:46:27 addons-006797 kubelet[1354]: W0226 11:46:27.756941    1354 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006797" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006797' and this object
	W0226 11:47:10.918620  614803 logs.go:138] Found kubelet problem: Feb 26 11:46:27 addons-006797 kubelet[1354]: E0226 11:46:27.757003    1354 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006797" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006797' and this object
	I0226 11:47:10.953769  614803 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:47:10.953808  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0226 11:47:11.019102  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:11.107784  614803 logs.go:123] Gathering logs for kube-apiserver [04d4b11ac34d15fcc60a3b6bee6bf3355bcc03aa6d7fcdecdc9ec4aef9323da7] ...
	I0226 11:47:11.107819  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04d4b11ac34d15fcc60a3b6bee6bf3355bcc03aa6d7fcdecdc9ec4aef9323da7"
	I0226 11:47:11.185268  614803 logs.go:123] Gathering logs for kube-proxy [66663f67cb76bb86d3bebec7cfb039fc4d214c3b7eb5af78929fab27a7421d68] ...
	I0226 11:47:11.185356  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66663f67cb76bb86d3bebec7cfb039fc4d214c3b7eb5af78929fab27a7421d68"
	I0226 11:47:11.231789  614803 out.go:304] Setting ErrFile to fd 2...
	I0226 11:47:11.231813  614803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:47:11.231858  614803 out.go:239] X Problems detected in kubelet:
	W0226 11:47:11.231867  614803 out.go:239]   Feb 26 11:46:27 addons-006797 kubelet[1354]: W0226 11:46:27.756941    1354 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006797" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006797' and this object
	W0226 11:47:11.231875  614803 out.go:239]   Feb 26 11:46:27 addons-006797 kubelet[1354]: E0226 11:46:27.757003    1354 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006797" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006797' and this object
	I0226 11:47:11.231883  614803 out.go:304] Setting ErrFile to fd 2...
	I0226 11:47:11.231889  614803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:47:11.246090  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:11.317123  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:11.512181  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:11.746980  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:11.810882  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:12.025914  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:12.246418  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:12.307044  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:12.512685  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:12.747626  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:12.810108  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:13.015779  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:13.245925  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:13.306262  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:13.512054  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:13.746215  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:13.807736  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:14.018370  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:14.246920  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:14.307006  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:14.512493  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:14.747057  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:14.808787  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:15.019697  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:15.245372  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:15.307032  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:15.512580  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:15.747219  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:15.806860  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:16.016163  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:16.246845  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:16.326201  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:16.512967  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:16.754087  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:16.808557  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:17.064146  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:17.246220  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:17.306469  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:17.514317  614803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0226 11:47:17.746693  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:17.811514  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:18.024990  614803 kapi.go:107] duration metric: took 1m19.519465864s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0226 11:47:18.246094  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:18.308105  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:18.748837  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:18.856149  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:19.247040  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:19.309027  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:19.752099  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:19.818647  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:20.246317  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:20.311073  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:20.745606  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0226 11:47:20.806106  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:21.233679  614803 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0226 11:47:21.243420  614803 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0226 11:47:21.245416  614803 api_server.go:141] control plane version: v1.28.4
	I0226 11:47:21.245481  614803 api_server.go:131] duration metric: took 11.393055013s to wait for apiserver health ...
	I0226 11:47:21.245519  614803 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 11:47:21.245558  614803 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0226 11:47:21.245642  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0226 11:47:21.257121  614803 kapi.go:107] duration metric: took 1m19.015273379s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0226 11:47:21.259635  614803 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-006797 cluster.
	I0226 11:47:21.261644  614803 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0226 11:47:21.263564  614803 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0226 11:47:21.309153  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:21.316781  614803 cri.go:89] found id: "04d4b11ac34d15fcc60a3b6bee6bf3355bcc03aa6d7fcdecdc9ec4aef9323da7"
	I0226 11:47:21.316805  614803 cri.go:89] found id: ""
	I0226 11:47:21.316812  614803 logs.go:276] 1 containers: [04d4b11ac34d15fcc60a3b6bee6bf3355bcc03aa6d7fcdecdc9ec4aef9323da7]
	I0226 11:47:21.316873  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:47:21.331696  614803 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0226 11:47:21.331838  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0226 11:47:21.431270  614803 cri.go:89] found id: "e9505d37afa43999407f50624df81d16bb3d9c65f9d4e68fdb2d850d71cbdd8e"
	I0226 11:47:21.431344  614803 cri.go:89] found id: ""
	I0226 11:47:21.431375  614803 logs.go:276] 1 containers: [e9505d37afa43999407f50624df81d16bb3d9c65f9d4e68fdb2d850d71cbdd8e]
	I0226 11:47:21.431462  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:47:21.435699  614803 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0226 11:47:21.435822  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0226 11:47:21.514051  614803 cri.go:89] found id: "be0efb2580c536cde20c9dde56a98927c52fde83af8af15f34bbc14153b97e41"
	I0226 11:47:21.514130  614803 cri.go:89] found id: ""
	I0226 11:47:21.514153  614803 logs.go:276] 1 containers: [be0efb2580c536cde20c9dde56a98927c52fde83af8af15f34bbc14153b97e41]
	I0226 11:47:21.514245  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:47:21.518585  614803 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0226 11:47:21.518722  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0226 11:47:21.619241  614803 cri.go:89] found id: "48715769dded4613361b91993bb07e4ea5ded8a791ca6aeb3c58840e3064365e"
	I0226 11:47:21.619313  614803 cri.go:89] found id: ""
	I0226 11:47:21.619335  614803 logs.go:276] 1 containers: [48715769dded4613361b91993bb07e4ea5ded8a791ca6aeb3c58840e3064365e]
	I0226 11:47:21.619433  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:47:21.623551  614803 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0226 11:47:21.623675  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0226 11:47:21.689507  614803 cri.go:89] found id: "66663f67cb76bb86d3bebec7cfb039fc4d214c3b7eb5af78929fab27a7421d68"
	I0226 11:47:21.689578  614803 cri.go:89] found id: ""
	I0226 11:47:21.689609  614803 logs.go:276] 1 containers: [66663f67cb76bb86d3bebec7cfb039fc4d214c3b7eb5af78929fab27a7421d68]
	I0226 11:47:21.689701  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:47:21.695574  614803 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0226 11:47:21.695696  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0226 11:47:21.743298  614803 cri.go:89] found id: "0bd9d07570c891f4b4769ab8e5adf03a13f9f164ef8538d6424930f60a41d810"
	I0226 11:47:21.743391  614803 cri.go:89] found id: ""
	I0226 11:47:21.743414  614803 logs.go:276] 1 containers: [0bd9d07570c891f4b4769ab8e5adf03a13f9f164ef8538d6424930f60a41d810]
	I0226 11:47:21.743504  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:47:21.747170  614803 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0226 11:47:21.747284  614803 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0226 11:47:21.801167  614803 cri.go:89] found id: "11c57a321f69b62844de5ad0e3677c6e80e99a92eec6d85538f7f35e5895e787"
	I0226 11:47:21.801235  614803 cri.go:89] found id: ""
	I0226 11:47:21.801256  614803 logs.go:276] 1 containers: [11c57a321f69b62844de5ad0e3677c6e80e99a92eec6d85538f7f35e5895e787]
	I0226 11:47:21.801350  614803 ssh_runner.go:195] Run: which crictl
	I0226 11:47:21.804958  614803 logs.go:123] Gathering logs for kube-scheduler [48715769dded4613361b91993bb07e4ea5ded8a791ca6aeb3c58840e3064365e] ...
	I0226 11:47:21.804980  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48715769dded4613361b91993bb07e4ea5ded8a791ca6aeb3c58840e3064365e"
	I0226 11:47:21.830956  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:21.856822  614803 logs.go:123] Gathering logs for kube-proxy [66663f67cb76bb86d3bebec7cfb039fc4d214c3b7eb5af78929fab27a7421d68] ...
	I0226 11:47:21.856898  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66663f67cb76bb86d3bebec7cfb039fc4d214c3b7eb5af78929fab27a7421d68"
	I0226 11:47:21.911333  614803 logs.go:123] Gathering logs for kube-controller-manager [0bd9d07570c891f4b4769ab8e5adf03a13f9f164ef8538d6424930f60a41d810] ...
	I0226 11:47:21.911415  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0bd9d07570c891f4b4769ab8e5adf03a13f9f164ef8538d6424930f60a41d810"
	I0226 11:47:21.989913  614803 logs.go:123] Gathering logs for kindnet [11c57a321f69b62844de5ad0e3677c6e80e99a92eec6d85538f7f35e5895e787] ...
	I0226 11:47:21.989948  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c57a321f69b62844de5ad0e3677c6e80e99a92eec6d85538f7f35e5895e787"
	I0226 11:47:22.073211  614803 logs.go:123] Gathering logs for dmesg ...
	I0226 11:47:22.073240  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 11:47:22.105677  614803 logs.go:123] Gathering logs for describe nodes ...
	I0226 11:47:22.105704  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0226 11:47:22.299166  614803 logs.go:123] Gathering logs for kube-apiserver [04d4b11ac34d15fcc60a3b6bee6bf3355bcc03aa6d7fcdecdc9ec4aef9323da7] ...
	I0226 11:47:22.299197  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04d4b11ac34d15fcc60a3b6bee6bf3355bcc03aa6d7fcdecdc9ec4aef9323da7"
	I0226 11:47:22.307024  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:22.360615  614803 logs.go:123] Gathering logs for etcd [e9505d37afa43999407f50624df81d16bb3d9c65f9d4e68fdb2d850d71cbdd8e] ...
	I0226 11:47:22.360650  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9505d37afa43999407f50624df81d16bb3d9c65f9d4e68fdb2d850d71cbdd8e"
	I0226 11:47:22.411266  614803 logs.go:123] Gathering logs for CRI-O ...
	I0226 11:47:22.411304  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0226 11:47:22.506573  614803 logs.go:123] Gathering logs for container status ...
	I0226 11:47:22.506652  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 11:47:22.575072  614803 logs.go:123] Gathering logs for kubelet ...
	I0226 11:47:22.575105  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0226 11:47:22.640400  614803 logs.go:138] Found kubelet problem: Feb 26 11:46:27 addons-006797 kubelet[1354]: W0226 11:46:27.756941    1354 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006797" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006797' and this object
	W0226 11:47:22.640633  614803 logs.go:138] Found kubelet problem: Feb 26 11:46:27 addons-006797 kubelet[1354]: E0226 11:46:27.757003    1354 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006797" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006797' and this object
	I0226 11:47:22.680638  614803 logs.go:123] Gathering logs for coredns [be0efb2580c536cde20c9dde56a98927c52fde83af8af15f34bbc14153b97e41] ...
	I0226 11:47:22.680696  614803 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be0efb2580c536cde20c9dde56a98927c52fde83af8af15f34bbc14153b97e41"
	I0226 11:47:22.720384  614803 out.go:304] Setting ErrFile to fd 2...
	I0226 11:47:22.720417  614803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0226 11:47:22.720490  614803 out.go:239] X Problems detected in kubelet:
	W0226 11:47:22.720506  614803 out.go:239]   Feb 26 11:46:27 addons-006797 kubelet[1354]: W0226 11:46:27.756941    1354 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006797" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006797' and this object
	W0226 11:47:22.720539  614803 out.go:239]   Feb 26 11:46:27 addons-006797 kubelet[1354]: E0226 11:46:27.757003    1354 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-006797" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-006797' and this object
	I0226 11:47:22.720553  614803 out.go:304] Setting ErrFile to fd 2...
	I0226 11:47:22.720569  614803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:47:22.806426  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:23.306534  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:23.808164  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:24.306472  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:24.805763  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:25.306033  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:25.807598  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:26.309022  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:26.805996  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:27.310118  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:27.805919  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:28.305342  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:28.806680  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:29.306551  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:29.805451  614803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0226 11:47:30.306529  614803 kapi.go:107] duration metric: took 1m31.506709405s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0226 11:47:30.308729  614803 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, default-storageclass, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0226 11:47:30.310858  614803 addons.go:505] enable addons completed in 1m38.959360604s: enabled=[cloud-spanner nvidia-device-plugin ingress-dns default-storageclass storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0226 11:47:32.733638  614803 system_pods.go:59] 18 kube-system pods found
	I0226 11:47:32.733674  614803 system_pods.go:61] "coredns-5dd5756b68-2n2bn" [112eca67-d522-4469-aa25-7d8fe877b932] Running
	I0226 11:47:32.733681  614803 system_pods.go:61] "csi-hostpath-attacher-0" [9790ca86-b2a9-461b-96d7-17e9d31a9eaa] Running
	I0226 11:47:32.733685  614803 system_pods.go:61] "csi-hostpath-resizer-0" [7d064a7d-0f4e-45b7-9c4f-566fe7091f12] Running
	I0226 11:47:32.733689  614803 system_pods.go:61] "csi-hostpathplugin-sn8lq" [66dd4782-8001-42a0-8f86-9ef0e8ffa06d] Running
	I0226 11:47:32.733693  614803 system_pods.go:61] "etcd-addons-006797" [39768eec-5fb2-4159-bbf1-e02c4005bc30] Running
	I0226 11:47:32.733697  614803 system_pods.go:61] "kindnet-nl588" [69d0a8ae-04a9-4a5a-9dbd-714946e4fac6] Running
	I0226 11:47:32.733701  614803 system_pods.go:61] "kube-apiserver-addons-006797" [61e46e14-c4d1-4a07-ac93-e4444cae6268] Running
	I0226 11:47:32.733705  614803 system_pods.go:61] "kube-controller-manager-addons-006797" [e13a594f-c72c-4af1-abde-9d078c1b6500] Running
	I0226 11:47:32.733715  614803 system_pods.go:61] "kube-ingress-dns-minikube" [fa3b3e84-32d1-450e-b388-d26586802798] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0226 11:47:32.733724  614803 system_pods.go:61] "kube-proxy-5xmt7" [6d3d6175-90cf-4c3d-b744-cace07cd5400] Running
	I0226 11:47:32.733729  614803 system_pods.go:61] "kube-scheduler-addons-006797" [f7c2944a-1dd6-4a05-adeb-bf8e2deddadb] Running
	I0226 11:47:32.733736  614803 system_pods.go:61] "metrics-server-69cf46c98-4r7kf" [67e2b4c0-08e4-47f1-8f49-27fe813e4211] Running
	I0226 11:47:32.733740  614803 system_pods.go:61] "nvidia-device-plugin-daemonset-z98fk" [30c9a229-be92-41ee-a430-6170767b3979] Running
	I0226 11:47:32.733744  614803 system_pods.go:61] "registry-fcrhw" [b5614868-d4d9-4a9d-b64e-b828d191ec44] Running
	I0226 11:47:32.733747  614803 system_pods.go:61] "registry-proxy-qwsks" [f04cec61-ef11-4852-919b-e0bc55dd9118] Running
	I0226 11:47:32.733751  614803 system_pods.go:61] "snapshot-controller-58dbcc7b99-lc7mv" [15a78518-d05d-4a44-b9d1-6908aa0729b8] Running
	I0226 11:47:32.733755  614803 system_pods.go:61] "snapshot-controller-58dbcc7b99-t82mz" [0520f5d8-3f1a-4414-b7f1-396d507bcd11] Running
	I0226 11:47:32.733766  614803 system_pods.go:61] "storage-provisioner" [4949265a-5820-46dc-8908-3343b393d939] Running
	I0226 11:47:32.733773  614803 system_pods.go:74] duration metric: took 11.488231714s to wait for pod list to return data ...
	I0226 11:47:32.733782  614803 default_sa.go:34] waiting for default service account to be created ...
	I0226 11:47:32.737290  614803 default_sa.go:45] found service account: "default"
	I0226 11:47:32.737316  614803 default_sa.go:55] duration metric: took 3.524377ms for default service account to be created ...
	I0226 11:47:32.737326  614803 system_pods.go:116] waiting for k8s-apps to be running ...
	I0226 11:47:32.748629  614803 system_pods.go:86] 18 kube-system pods found
	I0226 11:47:32.748689  614803 system_pods.go:89] "coredns-5dd5756b68-2n2bn" [112eca67-d522-4469-aa25-7d8fe877b932] Running
	I0226 11:47:32.748698  614803 system_pods.go:89] "csi-hostpath-attacher-0" [9790ca86-b2a9-461b-96d7-17e9d31a9eaa] Running
	I0226 11:47:32.748709  614803 system_pods.go:89] "csi-hostpath-resizer-0" [7d064a7d-0f4e-45b7-9c4f-566fe7091f12] Running
	I0226 11:47:32.748714  614803 system_pods.go:89] "csi-hostpathplugin-sn8lq" [66dd4782-8001-42a0-8f86-9ef0e8ffa06d] Running
	I0226 11:47:32.748719  614803 system_pods.go:89] "etcd-addons-006797" [39768eec-5fb2-4159-bbf1-e02c4005bc30] Running
	I0226 11:47:32.748725  614803 system_pods.go:89] "kindnet-nl588" [69d0a8ae-04a9-4a5a-9dbd-714946e4fac6] Running
	I0226 11:47:32.748736  614803 system_pods.go:89] "kube-apiserver-addons-006797" [61e46e14-c4d1-4a07-ac93-e4444cae6268] Running
	I0226 11:47:32.748741  614803 system_pods.go:89] "kube-controller-manager-addons-006797" [e13a594f-c72c-4af1-abde-9d078c1b6500] Running
	I0226 11:47:32.748756  614803 system_pods.go:89] "kube-ingress-dns-minikube" [fa3b3e84-32d1-450e-b388-d26586802798] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0226 11:47:32.748762  614803 system_pods.go:89] "kube-proxy-5xmt7" [6d3d6175-90cf-4c3d-b744-cace07cd5400] Running
	I0226 11:47:32.748774  614803 system_pods.go:89] "kube-scheduler-addons-006797" [f7c2944a-1dd6-4a05-adeb-bf8e2deddadb] Running
	I0226 11:47:32.748783  614803 system_pods.go:89] "metrics-server-69cf46c98-4r7kf" [67e2b4c0-08e4-47f1-8f49-27fe813e4211] Running
	I0226 11:47:32.748788  614803 system_pods.go:89] "nvidia-device-plugin-daemonset-z98fk" [30c9a229-be92-41ee-a430-6170767b3979] Running
	I0226 11:47:32.748793  614803 system_pods.go:89] "registry-fcrhw" [b5614868-d4d9-4a9d-b64e-b828d191ec44] Running
	I0226 11:47:32.748799  614803 system_pods.go:89] "registry-proxy-qwsks" [f04cec61-ef11-4852-919b-e0bc55dd9118] Running
	I0226 11:47:32.748808  614803 system_pods.go:89] "snapshot-controller-58dbcc7b99-lc7mv" [15a78518-d05d-4a44-b9d1-6908aa0729b8] Running
	I0226 11:47:32.748817  614803 system_pods.go:89] "snapshot-controller-58dbcc7b99-t82mz" [0520f5d8-3f1a-4414-b7f1-396d507bcd11] Running
	I0226 11:47:32.748822  614803 system_pods.go:89] "storage-provisioner" [4949265a-5820-46dc-8908-3343b393d939] Running
	I0226 11:47:32.748828  614803 system_pods.go:126] duration metric: took 11.496646ms to wait for k8s-apps to be running ...
	I0226 11:47:32.748840  614803 system_svc.go:44] waiting for kubelet service to be running ....
	I0226 11:47:32.748908  614803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 11:47:32.763390  614803 system_svc.go:56] duration metric: took 14.538271ms WaitForService to wait for kubelet.
	I0226 11:47:32.763469  614803 kubeadm.go:581] duration metric: took 1m40.465725886s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0226 11:47:32.763510  614803 node_conditions.go:102] verifying NodePressure condition ...
	I0226 11:47:32.767869  614803 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0226 11:47:32.767906  614803 node_conditions.go:123] node cpu capacity is 2
	I0226 11:47:32.767921  614803 node_conditions.go:105] duration metric: took 4.393282ms to run NodePressure ...
	I0226 11:47:32.767940  614803 start.go:228] waiting for startup goroutines ...
	I0226 11:47:32.767949  614803 start.go:233] waiting for cluster config update ...
	I0226 11:47:32.767971  614803 start.go:242] writing updated cluster config ...
	I0226 11:47:32.768268  614803 ssh_runner.go:195] Run: rm -f paused
	I0226 11:47:33.111662  614803 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0226 11:47:33.114097  614803 out.go:177] * Done! kubectl is now configured to use "addons-006797" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 26 11:50:47 addons-006797 crio[894]: time="2024-02-26 11:50:47.940025432Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=d8809946-ed29-47cb-a4f4-52fb589840e2 name=/runtime.v1.ImageService/ImageStatus
	Feb 26 11:50:47 addons-006797 crio[894]: time="2024-02-26 11:50:47.940953422Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=7b9ec926-8f3e-49c4-b2c7-5100829f595f name=/runtime.v1.ImageService/ImageStatus
	Feb 26 11:50:47 addons-006797 crio[894]: time="2024-02-26 11:50:47.942792844Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=7b9ec926-8f3e-49c4-b2c7-5100829f595f name=/runtime.v1.ImageService/ImageStatus
	Feb 26 11:50:47 addons-006797 crio[894]: time="2024-02-26 11:50:47.943773698Z" level=info msg="Creating container: default/hello-world-app-5d77478584-s6mrr/hello-world-app" id=2aee1931-a93d-4123-90da-87b1d8b413ec name=/runtime.v1.RuntimeService/CreateContainer
	Feb 26 11:50:47 addons-006797 crio[894]: time="2024-02-26 11:50:47.943879442Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 26 11:50:48 addons-006797 crio[894]: time="2024-02-26 11:50:48.010622314Z" level=info msg="Created container c0df44a444e033d6bf5a9e20d657b9f816d08f5253af665415b34d215a4de20a: default/hello-world-app-5d77478584-s6mrr/hello-world-app" id=2aee1931-a93d-4123-90da-87b1d8b413ec name=/runtime.v1.RuntimeService/CreateContainer
	Feb 26 11:50:48 addons-006797 crio[894]: time="2024-02-26 11:50:48.011691469Z" level=info msg="Starting container: c0df44a444e033d6bf5a9e20d657b9f816d08f5253af665415b34d215a4de20a" id=b0e52fcc-dfea-48ef-a360-712181f57f32 name=/runtime.v1.RuntimeService/StartContainer
	Feb 26 11:50:48 addons-006797 crio[894]: time="2024-02-26 11:50:48.024526055Z" level=info msg="Started container" PID=8239 containerID=c0df44a444e033d6bf5a9e20d657b9f816d08f5253af665415b34d215a4de20a description=default/hello-world-app-5d77478584-s6mrr/hello-world-app id=b0e52fcc-dfea-48ef-a360-712181f57f32 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6964a9606a3f60a2970bb257bcfb9ee27b13206d863b004bcf4044295fa4091f
	Feb 26 11:50:48 addons-006797 conmon[8227]: conmon c0df44a444e033d6bf5a <ninfo>: container 8239 exited with status 1
	Feb 26 11:50:48 addons-006797 crio[894]: time="2024-02-26 11:50:48.494475183Z" level=info msg="Removing container: 7c3ba59c2b9f5444318de9f1ff3908ff19767183d6c876101bc51386f22ce989" id=50a45dac-40d1-427e-9d17-83ad01cd6845 name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 26 11:50:48 addons-006797 crio[894]: time="2024-02-26 11:50:48.514354260Z" level=info msg="Removed container 7c3ba59c2b9f5444318de9f1ff3908ff19767183d6c876101bc51386f22ce989: default/hello-world-app-5d77478584-s6mrr/hello-world-app" id=50a45dac-40d1-427e-9d17-83ad01cd6845 name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 26 11:50:49 addons-006797 crio[894]: time="2024-02-26 11:50:49.226255222Z" level=warning msg="Stopping container 06a1f3691dc6740333116b4f42470aed81c9824320e58916b1c8d59eef075fa5 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=505167a8-f688-447b-bc62-b7b9d3bc9ee0 name=/runtime.v1.RuntimeService/StopContainer
	Feb 26 11:50:49 addons-006797 conmon[4716]: conmon 06a1f3691dc674033311 <ninfo>: container 4727 exited with status 137
	Feb 26 11:50:49 addons-006797 crio[894]: time="2024-02-26 11:50:49.367605086Z" level=info msg="Stopped container 06a1f3691dc6740333116b4f42470aed81c9824320e58916b1c8d59eef075fa5: ingress-nginx/ingress-nginx-controller-7967645744-7wq44/controller" id=505167a8-f688-447b-bc62-b7b9d3bc9ee0 name=/runtime.v1.RuntimeService/StopContainer
	Feb 26 11:50:49 addons-006797 crio[894]: time="2024-02-26 11:50:49.368172839Z" level=info msg="Stopping pod sandbox: a79a1fa119a584b60bb7be29baf5ea44a08fe6a9c0cac42fc3f6e06f6b74f6c8" id=917a3bc3-b5f8-438d-8eac-370a753f7ea8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 26 11:50:49 addons-006797 crio[894]: time="2024-02-26 11:50:49.371824419Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-QZDKKQY6BY6JA3KF - [0:0]\n:KUBE-HP-DA7ZYC5EVSSJKGTE - [0:0]\n-X KUBE-HP-QZDKKQY6BY6JA3KF\n-X KUBE-HP-DA7ZYC5EVSSJKGTE\nCOMMIT\n"
	Feb 26 11:50:49 addons-006797 crio[894]: time="2024-02-26 11:50:49.373225503Z" level=info msg="Closing host port tcp:80"
	Feb 26 11:50:49 addons-006797 crio[894]: time="2024-02-26 11:50:49.373274026Z" level=info msg="Closing host port tcp:443"
	Feb 26 11:50:49 addons-006797 crio[894]: time="2024-02-26 11:50:49.374653901Z" level=info msg="Host port tcp:80 does not have an open socket"
	Feb 26 11:50:49 addons-006797 crio[894]: time="2024-02-26 11:50:49.374686072Z" level=info msg="Host port tcp:443 does not have an open socket"
	Feb 26 11:50:49 addons-006797 crio[894]: time="2024-02-26 11:50:49.374859450Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7967645744-7wq44 Namespace:ingress-nginx ID:a79a1fa119a584b60bb7be29baf5ea44a08fe6a9c0cac42fc3f6e06f6b74f6c8 UID:0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab NetNS:/var/run/netns/df5aeb26-8421-4aec-bbc9-1f78fa8a96af Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 26 11:50:49 addons-006797 crio[894]: time="2024-02-26 11:50:49.375010658Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7967645744-7wq44 from CNI network \"kindnet\" (type=ptp)"
	Feb 26 11:50:49 addons-006797 crio[894]: time="2024-02-26 11:50:49.403289099Z" level=info msg="Stopped pod sandbox: a79a1fa119a584b60bb7be29baf5ea44a08fe6a9c0cac42fc3f6e06f6b74f6c8" id=917a3bc3-b5f8-438d-8eac-370a753f7ea8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 26 11:50:49 addons-006797 crio[894]: time="2024-02-26 11:50:49.498556043Z" level=info msg="Removing container: 06a1f3691dc6740333116b4f42470aed81c9824320e58916b1c8d59eef075fa5" id=d3547ba1-12f2-4836-b782-ef3b06e48327 name=/runtime.v1.RuntimeService/RemoveContainer
	Feb 26 11:50:49 addons-006797 crio[894]: time="2024-02-26 11:50:49.515072033Z" level=info msg="Removed container 06a1f3691dc6740333116b4f42470aed81c9824320e58916b1c8d59eef075fa5: ingress-nginx/ingress-nginx-controller-7967645744-7wq44/controller" id=d3547ba1-12f2-4836-b782-ef3b06e48327 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c0df44a444e03       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             6 seconds ago       Exited              hello-world-app           2                   6964a9606a3f6       hello-world-app-5d77478584-s6mrr
	eb78eaedc6032       ghcr.io/headlamp-k8s/headlamp@sha256:0fe50c48c186b89ff3d341dba427174d8232a64c3062af5de854a3a7cb2105ce                        56 seconds ago      Running             headlamp                  0                   8018998e98c7e       headlamp-7ddfbb94ff-rf6wk
	a1d2ac29501a1       docker.io/library/nginx@sha256:34aa0a372d3220dc0448131f809c72d8085f79bdec8058ad6970fc034a395674                              2 minutes ago       Running             nginx                     0                   e6c5716ec9c90       nginx
	fdeb0a86aaf26       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                 3 minutes ago       Running             gcp-auth                  0                   3cc1c9ec740e8       gcp-auth-5f6b4f85fd-m7czp
	5ee5a58d1cb3a       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   12674db58a664       yakd-dashboard-9947fc6bf-hs94l
	68e87b41c55cd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:25d6a5f11211cc5c3f9f2bf552b585374af287b4debf693cacbe2da47daa5084   4 minutes ago       Exited              patch                     0                   5f1b1b6100795       ingress-nginx-admission-patch-jpkvm
	ceb60edbe7287       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:25d6a5f11211cc5c3f9f2bf552b585374af287b4debf693cacbe2da47daa5084   4 minutes ago       Exited              create                    0                   0b6184ca9b771       ingress-nginx-admission-create-m6wqw
	be0efb2580c53       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago       Running             coredns                   0                   d2cc35e8cd76b       coredns-5dd5756b68-2n2bn
	1d879afb978da       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago       Running             storage-provisioner       0                   bf242a6cb3042       storage-provisioner
	11c57a321f69b       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988                           4 minutes ago       Running             kindnet-cni               0                   24799bc32eddc       kindnet-nl588
	66663f67cb76b       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                                             5 minutes ago       Running             kube-proxy                0                   97b0f5c064cdb       kube-proxy-5xmt7
	0bd9d07570c89       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                                             5 minutes ago       Running             kube-controller-manager   0                   5390c24dad35f       kube-controller-manager-addons-006797
	48715769dded4       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                                             5 minutes ago       Running             kube-scheduler            0                   788baea46c7d9       kube-scheduler-addons-006797
	04d4b11ac34d1       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                                             5 minutes ago       Running             kube-apiserver            0                   71a56981de6f6       kube-apiserver-addons-006797
	e9505d37afa43       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             5 minutes ago       Running             etcd                      0                   bae4dbe14c615       etcd-addons-006797
	
	
	==> coredns [be0efb2580c536cde20c9dde56a98927c52fde83af8af15f34bbc14153b97e41] <==
	[INFO] 10.244.0.19:36940 - 6816 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055474s
	[INFO] 10.244.0.19:36940 - 32892 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074369s
	[INFO] 10.244.0.19:36940 - 3991 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053849s
	[INFO] 10.244.0.19:36940 - 40967 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000071621s
	[INFO] 10.244.0.19:36940 - 5553 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001326018s
	[INFO] 10.244.0.19:36940 - 39611 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001016603s
	[INFO] 10.244.0.19:36940 - 12386 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000063145s
	[INFO] 10.244.0.19:52840 - 4897 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000100559s
	[INFO] 10.244.0.19:57241 - 41176 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059395s
	[INFO] 10.244.0.19:52840 - 53550 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000058772s
	[INFO] 10.244.0.19:57241 - 57781 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000048688s
	[INFO] 10.244.0.19:52840 - 4647 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044306s
	[INFO] 10.244.0.19:57241 - 49390 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000071719s
	[INFO] 10.244.0.19:52840 - 26204 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055408s
	[INFO] 10.244.0.19:57241 - 33433 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040983s
	[INFO] 10.244.0.19:52840 - 50071 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043683s
	[INFO] 10.244.0.19:57241 - 59738 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037791s
	[INFO] 10.244.0.19:52840 - 58020 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060536s
	[INFO] 10.244.0.19:57241 - 19152 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047769s
	[INFO] 10.244.0.19:52840 - 60016 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00181895s
	[INFO] 10.244.0.19:57241 - 16263 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.0022135s
	[INFO] 10.244.0.19:52840 - 24141 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001403217s
	[INFO] 10.244.0.19:52840 - 278 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000087661s
	[INFO] 10.244.0.19:57241 - 48386 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.0011584s
	[INFO] 10.244.0.19:57241 - 61292 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069946s
	
	
	==> describe nodes <==
	Name:               addons-006797
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-006797
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4011915ad0e9b27ff42994854397cc2ed93516c6
	                    minikube.k8s.io/name=addons-006797
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_26T11_45_39_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-006797
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Feb 2024 11:45:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-006797
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Feb 2024 11:50:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Feb 2024 11:50:44 +0000   Mon, 26 Feb 2024 11:45:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Feb 2024 11:50:44 +0000   Mon, 26 Feb 2024 11:45:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Feb 2024 11:50:44 +0000   Mon, 26 Feb 2024 11:45:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Feb 2024 11:50:44 +0000   Mon, 26 Feb 2024 11:46:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-006797
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 0eddcd00c18745ae86e1427e2d0ed8f1
	  System UUID:                d1c664eb-683e-4bda-97f6-0664891b1869
	  Boot ID:                    18acc680-2ad9-4339-83b8-bdf83df5c458
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-s6mrr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  gcp-auth                    gcp-auth-5f6b4f85fd-m7czp                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  headlamp                    headlamp-7ddfbb94ff-rf6wk                0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 coredns-5dd5756b68-2n2bn                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m3s
	  kube-system                 etcd-addons-006797                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m16s
	  kube-system                 kindnet-nl588                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m4s
	  kube-system                 kube-apiserver-addons-006797             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-controller-manager-addons-006797    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-proxy-5xmt7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-scheduler-addons-006797             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-hs94l           0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m56s                  kube-proxy       
	  Normal  Starting                 5m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m24s (x4 over 5m24s)  kubelet          Node addons-006797 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x4 over 5m24s)  kubelet          Node addons-006797 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x3 over 5m24s)  kubelet          Node addons-006797 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m16s                  kubelet          Node addons-006797 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s                  kubelet          Node addons-006797 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s                  kubelet          Node addons-006797 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m4s                   node-controller  Node addons-006797 event: Registered Node addons-006797 in Controller
	  Normal  NodeReady                4m27s                  kubelet          Node addons-006797 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001079] FS-Cache: N-key=[8] '54e4c90000000000'
	[  +0.005303] FS-Cache: Duplicate cookie detected
	[  +0.000781] FS-Cache: O-cookie c=000000cc [p=000000c9 fl=226 nc=0 na=1]
	[  +0.001013] FS-Cache: O-cookie d=00000000841d89c0{9p.inode} n=00000000d54607c6
	[  +0.001135] FS-Cache: O-key=[8] '54e4c90000000000'
	[  +0.000765] FS-Cache: N-cookie c=000000d3 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000991] FS-Cache: N-cookie d=00000000841d89c0{9p.inode} n=00000000a0819983
	[  +0.001115] FS-Cache: N-key=[8] '54e4c90000000000'
	[  +2.690797] FS-Cache: Duplicate cookie detected
	[  +0.000712] FS-Cache: O-cookie c=000000ca [p=000000c9 fl=226 nc=0 na=1]
	[  +0.001072] FS-Cache: O-cookie d=00000000841d89c0{9p.inode} n=00000000c062c36f
	[  +0.001226] FS-Cache: O-key=[8] '53e4c90000000000'
	[  +0.000782] FS-Cache: N-cookie c=000000d5 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000951] FS-Cache: N-cookie d=00000000841d89c0{9p.inode} n=00000000de533f6b
	[  +0.001046] FS-Cache: N-key=[8] '53e4c90000000000'
	[  +0.405662] FS-Cache: Duplicate cookie detected
	[  +0.000991] FS-Cache: O-cookie c=000000cf [p=000000c9 fl=226 nc=0 na=1]
	[  +0.001232] FS-Cache: O-cookie d=00000000841d89c0{9p.inode} n=00000000e6eb4eb1
	[  +0.001319] FS-Cache: O-key=[8] '59e4c90000000000'
	[  +0.000832] FS-Cache: N-cookie c=000000d6 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.001181] FS-Cache: N-cookie d=00000000841d89c0{9p.inode} n=000000004c5f50b5
	[  +0.001298] FS-Cache: N-key=[8] '59e4c90000000000'
	[Feb26 10:58] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.010064] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.006530] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	
	
	==> etcd [e9505d37afa43999407f50624df81d16bb3d9c65f9d4e68fdb2d850d71cbdd8e] <==
	{"level":"info","ts":"2024-02-26T11:45:31.636743Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-26T11:45:31.637064Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-006797 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-26T11:45:31.637125Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T11:45:31.638176Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-26T11:45:31.638427Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T11:45:31.63933Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-26T11:45:31.639756Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-26T11:45:31.639808Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-26T11:45:31.660778Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-26T11:45:31.660944Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-26T11:45:31.661008Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-26T11:45:54.991898Z","caller":"traceutil/trace.go:171","msg":"trace[1810075446] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"213.89698ms","start":"2024-02-26T11:45:54.777983Z","end":"2024-02-26T11:45:54.99188Z","steps":["trace[1810075446] 'process raft request'  (duration: 145.076052ms)","trace[1810075446] 'compare'  (duration: 68.739724ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-26T11:45:55.19299Z","caller":"traceutil/trace.go:171","msg":"trace[1942363881] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"135.846881ms","start":"2024-02-26T11:45:55.057125Z","end":"2024-02-26T11:45:55.192972Z","steps":["trace[1942363881] 'process raft request'  (duration: 67.737932ms)","trace[1942363881] 'compare'  (duration: 67.833331ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-26T11:45:55.193221Z","caller":"traceutil/trace.go:171","msg":"trace[274694754] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"171.668881ms","start":"2024-02-26T11:45:55.021545Z","end":"2024-02-26T11:45:55.193214Z","steps":["trace[274694754] 'process raft request'  (duration: 171.242482ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-26T11:45:56.143521Z","caller":"traceutil/trace.go:171","msg":"trace[829673540] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"128.035662ms","start":"2024-02-26T11:45:56.015461Z","end":"2024-02-26T11:45:56.143497Z","steps":["trace[829673540] 'process raft request'  (duration: 127.931805ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-26T11:45:56.167583Z","caller":"traceutil/trace.go:171","msg":"trace[911873094] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"151.633298ms","start":"2024-02-26T11:45:56.015737Z","end":"2024-02-26T11:45:56.16737Z","steps":["trace[911873094] 'process raft request'  (duration: 151.529071ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-26T11:45:56.905065Z","caller":"traceutil/trace.go:171","msg":"trace[50577801] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"110.207027ms","start":"2024-02-26T11:45:56.792368Z","end":"2024-02-26T11:45:56.902575Z","steps":["trace[50577801] 'process raft request'  (duration: 109.811068ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-26T11:45:56.910009Z","caller":"traceutil/trace.go:171","msg":"trace[1096809463] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"148.204584ms","start":"2024-02-26T11:45:56.761787Z","end":"2024-02-26T11:45:56.909991Z","steps":["trace[1096809463] 'process raft request'  (duration: 84.300595ms)","trace[1096809463] 'compare'  (duration: 55.868701ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-26T11:45:56.920278Z","caller":"traceutil/trace.go:171","msg":"trace[539879162] linearizableReadLoop","detail":"{readStateIndex:449; appliedIndex:448; }","duration":"131.672222ms","start":"2024-02-26T11:45:56.788565Z","end":"2024-02-26T11:45:56.920237Z","steps":["trace[539879162] 'read index received'  (duration: 3.080088ms)","trace[539879162] 'applied index is now lower than readState.Index'  (duration: 128.58895ms)"],"step_count":2}
	{"level":"warn","ts":"2024-02-26T11:45:56.920387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.815865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-26T11:45:56.923355Z","caller":"traceutil/trace.go:171","msg":"trace[962266602] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:446; }","duration":"134.806594ms","start":"2024-02-26T11:45:56.788535Z","end":"2024-02-26T11:45:56.923341Z","steps":["trace[962266602] 'agreement among raft nodes before linearized reading'  (duration: 131.779682ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-26T11:45:56.935219Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.428317ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:79111"}
	{"level":"info","ts":"2024-02-26T11:45:56.93528Z","caller":"traceutil/trace.go:171","msg":"trace[289544266] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:447; }","duration":"143.499683ms","start":"2024-02-26T11:45:56.79177Z","end":"2024-02-26T11:45:56.935269Z","steps":["trace[289544266] 'agreement among raft nodes before linearized reading'  (duration: 143.368684ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-26T11:45:56.93604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.109053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-26T11:45:56.936083Z","caller":"traceutil/trace.go:171","msg":"trace[1976592834] range","detail":"{range_begin:/registry/clusterrolebindings/storage-provisioner; range_end:; response_count:0; response_revision:447; }","duration":"133.156477ms","start":"2024-02-26T11:45:56.802918Z","end":"2024-02-26T11:45:56.936074Z","steps":["trace[1976592834] 'agreement among raft nodes before linearized reading'  (duration: 133.096368ms)"],"step_count":1}
	
	
	==> gcp-auth [fdeb0a86aaf261e1e3307d06d0734e2cb498a8dc5822c4a9bef982dd1dd44982] <==
	2024/02/26 11:47:20 GCP Auth Webhook started!
	2024/02/26 11:47:44 Ready to marshal response ...
	2024/02/26 11:47:44 Ready to write response ...
	2024/02/26 11:48:01 Ready to marshal response ...
	2024/02/26 11:48:01 Ready to write response ...
	2024/02/26 11:48:08 Ready to marshal response ...
	2024/02/26 11:48:08 Ready to write response ...
	2024/02/26 11:48:23 Ready to marshal response ...
	2024/02/26 11:48:23 Ready to write response ...
	2024/02/26 11:48:53 Ready to marshal response ...
	2024/02/26 11:48:53 Ready to write response ...
	2024/02/26 11:48:53 Ready to marshal response ...
	2024/02/26 11:48:53 Ready to write response ...
	2024/02/26 11:49:02 Ready to marshal response ...
	2024/02/26 11:49:02 Ready to write response ...
	2024/02/26 11:49:54 Ready to marshal response ...
	2024/02/26 11:49:54 Ready to write response ...
	2024/02/26 11:49:54 Ready to marshal response ...
	2024/02/26 11:49:54 Ready to write response ...
	2024/02/26 11:49:54 Ready to marshal response ...
	2024/02/26 11:49:54 Ready to write response ...
	2024/02/26 11:50:28 Ready to marshal response ...
	2024/02/26 11:50:28 Ready to write response ...
	
	
	==> kernel <==
	 11:50:54 up 1 day, 33 min,  0 users,  load average: 0.34, 1.02, 1.17
	Linux addons-006797 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [11c57a321f69b62844de5ad0e3677c6e80e99a92eec6d85538f7f35e5895e787] <==
	I0226 11:48:47.690983       1 main.go:227] handling current node
	I0226 11:48:57.694960       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:48:57.694986       1 main.go:227] handling current node
	I0226 11:49:07.699247       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:49:07.699275       1 main.go:227] handling current node
	I0226 11:49:17.710030       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:49:17.710056       1 main.go:227] handling current node
	I0226 11:49:27.720311       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:49:27.720339       1 main.go:227] handling current node
	I0226 11:49:37.724178       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:49:37.724204       1 main.go:227] handling current node
	I0226 11:49:47.735559       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:49:47.735587       1 main.go:227] handling current node
	I0226 11:49:57.741291       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:49:57.741711       1 main.go:227] handling current node
	I0226 11:50:07.757586       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:50:07.757622       1 main.go:227] handling current node
	I0226 11:50:17.769171       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:50:17.769200       1 main.go:227] handling current node
	I0226 11:50:27.781731       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:50:27.781756       1 main.go:227] handling current node
	I0226 11:50:37.794470       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:50:37.794503       1 main.go:227] handling current node
	I0226 11:50:47.799251       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:50:47.799277       1 main.go:227] handling current node
	
	
	==> kube-apiserver [04d4b11ac34d15fcc60a3b6bee6bf3355bcc03aa6d7fcdecdc9ec4aef9323da7] <==
	I0226 11:48:39.858551       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0226 11:48:39.877774       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0226 11:48:39.878392       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0226 11:48:39.884637       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0226 11:48:39.884724       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0226 11:48:39.907270       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0226 11:48:39.907319       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0226 11:48:39.907785       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0226 11:48:39.907828       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0226 11:48:39.991328       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0226 11:48:39.991486       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0226 11:48:40.017375       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0226 11:48:40.017447       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0226 11:48:40.885755       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0226 11:48:41.017378       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0226 11:48:41.036965       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0226 11:48:52.433594       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0226 11:49:03.756550       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0226 11:49:03.759800       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0226 11:49:03.763262       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0226 11:49:18.763812       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0226 11:49:54.035774       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.155.192"}
	I0226 11:50:28.830905       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.242.105"}
	E0226 11:50:46.268237       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0226 11:50:49.136254       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [0bd9d07570c891f4b4769ab8e5adf03a13f9f164ef8538d6424930f60a41d810] <==
	E0226 11:49:58.803377       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0226 11:50:04.195244       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0226 11:50:04.195286       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0226 11:50:04.258933       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0226 11:50:04.258967       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0226 11:50:28.540636       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0226 11:50:28.558452       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-s6mrr"
	I0226 11:50:28.573015       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.990308ms"
	I0226 11:50:28.595693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="22.531733ms"
	I0226 11:50:28.595880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.322µs"
	W0226 11:50:30.239346       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0226 11:50:30.239380       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0226 11:50:32.474021       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="48.885µs"
	I0226 11:50:33.483070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="106.393µs"
	W0226 11:50:34.336852       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0226 11:50:34.336885       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0226 11:50:34.474452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="99.935µs"
	W0226 11:50:39.957011       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0226 11:50:39.957043       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0226 11:50:39.968703       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0226 11:50:39.968733       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0226 11:50:46.193266       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0226 11:50:46.198579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7967645744" duration="95.455µs"
	I0226 11:50:46.199548       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0226 11:50:48.516294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="81.663µs"
	
	
	==> kube-proxy [66663f67cb76bb86d3bebec7cfb039fc4d214c3b7eb5af78929fab27a7421d68] <==
	I0226 11:45:57.617908       1 server_others.go:69] "Using iptables proxy"
	I0226 11:45:57.724575       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0226 11:45:57.783187       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0226 11:45:57.800804       1 server_others.go:152] "Using iptables Proxier"
	I0226 11:45:57.800926       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0226 11:45:57.800958       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0226 11:45:57.801035       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0226 11:45:57.801313       1 server.go:846] "Version info" version="v1.28.4"
	I0226 11:45:57.801525       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 11:45:57.802324       1 config.go:188] "Starting service config controller"
	I0226 11:45:57.802399       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0226 11:45:57.802444       1 config.go:97] "Starting endpoint slice config controller"
	I0226 11:45:57.802494       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0226 11:45:57.802982       1 config.go:315] "Starting node config controller"
	I0226 11:45:57.803045       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0226 11:45:57.905381       1 shared_informer.go:318] Caches are synced for node config
	I0226 11:45:57.917039       1 shared_informer.go:318] Caches are synced for service config
	I0226 11:45:57.926968       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [48715769dded4613361b91993bb07e4ea5ded8a791ca6aeb3c58840e3064365e] <==
	W0226 11:45:34.843014       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0226 11:45:34.843183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0226 11:45:34.843326       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0226 11:45:34.843396       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0226 11:45:34.843472       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0226 11:45:34.843508       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0226 11:45:34.843624       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0226 11:45:34.843690       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0226 11:45:34.843801       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0226 11:45:34.843855       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0226 11:45:34.843988       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0226 11:45:34.844046       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0226 11:45:34.844367       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0226 11:45:34.844461       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0226 11:45:35.691407       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0226 11:45:35.691550       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0226 11:45:35.736976       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0226 11:45:35.737094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0226 11:45:35.843862       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0226 11:45:35.843967       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0226 11:45:35.880845       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0226 11:45:35.880952       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0226 11:45:35.919016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0226 11:45:35.919122       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0226 11:45:36.329579       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 26 11:50:38 addons-006797 kubelet[1354]: E0226 11:50:38.204550    1354 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/86a8efc9b3630b2f8edeed9e65030af9ebc3f226e1c87d0170d3cb8f461b8c8e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/86a8efc9b3630b2f8edeed9e65030af9ebc3f226e1c87d0170d3cb8f461b8c8e/diff: no such file or directory, extraDiskErr: <nil>
	Feb 26 11:50:38 addons-006797 kubelet[1354]: E0226 11:50:38.205686    1354 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5f496482d8b279771a0a076fad9f0060875230bc340789025c70f4b28787581f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5f496482d8b279771a0a076fad9f0060875230bc340789025c70f4b28787581f/diff: no such file or directory, extraDiskErr: <nil>
	Feb 26 11:50:44 addons-006797 kubelet[1354]: E0226 11:50:44.799280    1354 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/29472b0c119db5ea77b74610ba0605180ee6538afec940c32bd6c266b6c023dd/diff" to get inode usage: stat /var/lib/containers/storage/overlay/29472b0c119db5ea77b74610ba0605180ee6538afec940c32bd6c266b6c023dd/diff: no such file or directory, extraDiskErr: <nil>
	Feb 26 11:50:44 addons-006797 kubelet[1354]: I0226 11:50:44.845527    1354 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bncn9\" (UniqueName: \"kubernetes.io/projected/fa3b3e84-32d1-450e-b388-d26586802798-kube-api-access-bncn9\") pod \"fa3b3e84-32d1-450e-b388-d26586802798\" (UID: \"fa3b3e84-32d1-450e-b388-d26586802798\") "
	Feb 26 11:50:44 addons-006797 kubelet[1354]: I0226 11:50:44.847634    1354 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa3b3e84-32d1-450e-b388-d26586802798-kube-api-access-bncn9" (OuterVolumeSpecName: "kube-api-access-bncn9") pod "fa3b3e84-32d1-450e-b388-d26586802798" (UID: "fa3b3e84-32d1-450e-b388-d26586802798"). InnerVolumeSpecName "kube-api-access-bncn9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 26 11:50:44 addons-006797 kubelet[1354]: I0226 11:50:44.946498    1354 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bncn9\" (UniqueName: \"kubernetes.io/projected/fa3b3e84-32d1-450e-b388-d26586802798-kube-api-access-bncn9\") on node \"addons-006797\" DevicePath \"\""
	Feb 26 11:50:45 addons-006797 kubelet[1354]: I0226 11:50:45.484754    1354 scope.go:117] "RemoveContainer" containerID="754767899a80cdde8c21fd54d8d004316cd6cdee90ea7cca7968340f71ac3673"
	Feb 26 11:50:45 addons-006797 kubelet[1354]: I0226 11:50:45.940100    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fa3b3e84-32d1-450e-b388-d26586802798" path="/var/lib/kubelet/pods/fa3b3e84-32d1-450e-b388-d26586802798/volumes"
	Feb 26 11:50:47 addons-006797 kubelet[1354]: I0226 11:50:47.939184    1354 scope.go:117] "RemoveContainer" containerID="7c3ba59c2b9f5444318de9f1ff3908ff19767183d6c876101bc51386f22ce989"
	Feb 26 11:50:47 addons-006797 kubelet[1354]: I0226 11:50:47.941640    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="38c75f23-8ac4-4bee-addb-afc43d4bb63a" path="/var/lib/kubelet/pods/38c75f23-8ac4-4bee-addb-afc43d4bb63a/volumes"
	Feb 26 11:50:47 addons-006797 kubelet[1354]: I0226 11:50:47.942038    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="69591941-79e7-4afc-ab32-18de70ac4068" path="/var/lib/kubelet/pods/69591941-79e7-4afc-ab32-18de70ac4068/volumes"
	Feb 26 11:50:48 addons-006797 kubelet[1354]: I0226 11:50:48.493005    1354 scope.go:117] "RemoveContainer" containerID="7c3ba59c2b9f5444318de9f1ff3908ff19767183d6c876101bc51386f22ce989"
	Feb 26 11:50:48 addons-006797 kubelet[1354]: I0226 11:50:48.493267    1354 scope.go:117] "RemoveContainer" containerID="c0df44a444e033d6bf5a9e20d657b9f816d08f5253af665415b34d215a4de20a"
	Feb 26 11:50:48 addons-006797 kubelet[1354]: E0226 11:50:48.493537    1354 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-s6mrr_default(831ae4ef-1730-4047-8c69-6d73acce82ff)\"" pod="default/hello-world-app-5d77478584-s6mrr" podUID="831ae4ef-1730-4047-8c69-6d73acce82ff"
	Feb 26 11:50:49 addons-006797 kubelet[1354]: I0226 11:50:49.497229    1354 scope.go:117] "RemoveContainer" containerID="06a1f3691dc6740333116b4f42470aed81c9824320e58916b1c8d59eef075fa5"
	Feb 26 11:50:49 addons-006797 kubelet[1354]: I0226 11:50:49.515368    1354 scope.go:117] "RemoveContainer" containerID="06a1f3691dc6740333116b4f42470aed81c9824320e58916b1c8d59eef075fa5"
	Feb 26 11:50:49 addons-006797 kubelet[1354]: E0226 11:50:49.515886    1354 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06a1f3691dc6740333116b4f42470aed81c9824320e58916b1c8d59eef075fa5\": container with ID starting with 06a1f3691dc6740333116b4f42470aed81c9824320e58916b1c8d59eef075fa5 not found: ID does not exist" containerID="06a1f3691dc6740333116b4f42470aed81c9824320e58916b1c8d59eef075fa5"
	Feb 26 11:50:49 addons-006797 kubelet[1354]: I0226 11:50:49.515941    1354 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06a1f3691dc6740333116b4f42470aed81c9824320e58916b1c8d59eef075fa5"} err="failed to get container status \"06a1f3691dc6740333116b4f42470aed81c9824320e58916b1c8d59eef075fa5\": rpc error: code = NotFound desc = could not find container \"06a1f3691dc6740333116b4f42470aed81c9824320e58916b1c8d59eef075fa5\": container with ID starting with 06a1f3691dc6740333116b4f42470aed81c9824320e58916b1c8d59eef075fa5 not found: ID does not exist"
	Feb 26 11:50:49 addons-006797 kubelet[1354]: I0226 11:50:49.574278    1354 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf5d6\" (UniqueName: \"kubernetes.io/projected/0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab-kube-api-access-zf5d6\") pod \"0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab\" (UID: \"0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab\") "
	Feb 26 11:50:49 addons-006797 kubelet[1354]: I0226 11:50:49.574350    1354 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab-webhook-cert\") pod \"0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab\" (UID: \"0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab\") "
	Feb 26 11:50:49 addons-006797 kubelet[1354]: I0226 11:50:49.576800    1354 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab" (UID: "0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 26 11:50:49 addons-006797 kubelet[1354]: I0226 11:50:49.581247    1354 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab-kube-api-access-zf5d6" (OuterVolumeSpecName: "kube-api-access-zf5d6") pod "0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab" (UID: "0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab"). InnerVolumeSpecName "kube-api-access-zf5d6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 26 11:50:49 addons-006797 kubelet[1354]: I0226 11:50:49.675454    1354 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zf5d6\" (UniqueName: \"kubernetes.io/projected/0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab-kube-api-access-zf5d6\") on node \"addons-006797\" DevicePath \"\""
	Feb 26 11:50:49 addons-006797 kubelet[1354]: I0226 11:50:49.675494    1354 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab-webhook-cert\") on node \"addons-006797\" DevicePath \"\""
	Feb 26 11:50:49 addons-006797 kubelet[1354]: I0226 11:50:49.940381    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab" path="/var/lib/kubelet/pods/0ddf6a71-af97-4fbb-88cb-0b1f1cc739ab/volumes"
	
	
	==> storage-provisioner [1d879afb978daf8b321028298667dd63a72bd28c19301e30f5863e4c9dd0ad2b] <==
	I0226 11:46:28.285317       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0226 11:46:28.316601       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0226 11:46:28.323822       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0226 11:46:28.346686       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0226 11:46:28.346977       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-006797_55787a09-dc21-4c82-a252-fc9762235c22!
	I0226 11:46:28.347941       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d62d83a9-9939-4e3c-885c-6fd900bffb20", APIVersion:"v1", ResourceVersion:"871", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-006797_55787a09-dc21-4c82-a252-fc9762235c22 became leader
	I0226 11:46:28.448940       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-006797_55787a09-dc21-4c82-a252-fc9762235c22!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-006797 -n addons-006797
helpers_test.go:261: (dbg) Run:  kubectl --context addons-006797 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.15s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (179.85s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-329029 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-329029 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.389604898s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-329029 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-329029 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [489b602e-220c-449a-9738-85a44f4bfa80] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [489b602e-220c-449a-9738-85a44f4bfa80] Running
E0226 11:57:33.167823  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.002928579s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-329029 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0226 11:58:00.853637  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 11:59:41.616105  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 11:59:41.621411  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 11:59:41.631722  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 11:59:41.651967  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 11:59:41.692219  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 11:59:41.772574  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 11:59:41.932989  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 11:59:42.253530  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 11:59:42.894158  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 11:59:44.174586  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 11:59:46.736541  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-329029 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.678843116s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
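The failing step above is just `minikube ssh` wrapping a curl with a Host header, so it can be re-run by hand while debugging the ingress controller. Below is a minimal Go sketch of that reproduction (not part of the minikube test suite): the binary path, profile name and curl arguments are copied from the log lines above, while the retry count, the per-attempt timeout and the added `-m 10` curl limit are illustrative assumptions rather than the test's own values.

// probe_ingress.go: hand-rerun of the in-node curl probe that timed out above.
// Binary path, profile and URL/Host header come from the log; retry counts and
// timeouts are illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		minikubeBin = "out/minikube-linux-arm64"
		profile     = "ingress-addon-legacy-329029"
		curlCmd     = "curl -s -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	)
	for attempt := 1; attempt <= 5; attempt++ {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		out, err := exec.CommandContext(ctx, minikubeBin, "-p", profile, "ssh", curlCmd).CombinedOutput()
		cancel()
		if err == nil {
			fmt.Printf("attempt %d succeeded:\n%s\n", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed: %v (output: %q)\n", attempt, err, out)
		time.Sleep(10 * time.Second)
	}
	fmt.Println("ingress never answered; check the controller pod and its service endpoints")
}

Running the probe in a loop like this makes it easier to tell a controller that is merely slow to program its backends apart from one that never answers at all.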
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-329029 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-329029 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0226 11:59:51.856831  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 12:00:02.097129  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.04738659s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
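The nslookup output above only says that no servers could be reached, which does not distinguish "ingress-dns unreachable" from "record missing". A small Go sketch of a more direct check follows (again a reproduction aid, not the test's own code): the 192.168.49.2 address and the hello-john.test name come from the log above, while port 53/UDP and the timeouts are assumptions about the addon's defaults.

// dns_probe.go: query the ingress-dns address directly, mirroring
// "nslookup hello-john.test 192.168.49.2". Server IP and hostname come from
// the log; port 53 and the timeouts are assumptions.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	resolver := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			// Send every query to the ingress-dns address instead of the host's resolver.
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := resolver.LookupHost(ctx, "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed (matches the timeout seen in the test):", err)
		return
	}
	fmt.Println("hello-john.test resolves to:", addrs)
}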
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-329029 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-329029 addons disable ingress-dns --alsologtostderr -v=1: (2.878110723s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-329029 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-329029 addons disable ingress --alsologtostderr -v=1: (7.529530674s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-329029
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-329029:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e3ca6f842678ea4a1b5d3ed62f30eaf5aa8d6a1c4999ccf0a7538a97411aa71",
	        "Created": "2024-02-26T11:55:55.987784596Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 642615,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T11:55:56.302572407Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83c2925cbfe44b87b5c0672f16807927d87d8625e89de4dc154c45daaaa04b5b",
	        "ResolvConfPath": "/var/lib/docker/containers/5e3ca6f842678ea4a1b5d3ed62f30eaf5aa8d6a1c4999ccf0a7538a97411aa71/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e3ca6f842678ea4a1b5d3ed62f30eaf5aa8d6a1c4999ccf0a7538a97411aa71/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e3ca6f842678ea4a1b5d3ed62f30eaf5aa8d6a1c4999ccf0a7538a97411aa71/hosts",
	        "LogPath": "/var/lib/docker/containers/5e3ca6f842678ea4a1b5d3ed62f30eaf5aa8d6a1c4999ccf0a7538a97411aa71/5e3ca6f842678ea4a1b5d3ed62f30eaf5aa8d6a1c4999ccf0a7538a97411aa71-json.log",
	        "Name": "/ingress-addon-legacy-329029",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-329029:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-329029",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/421a3f5d2b4587fddc513a3766fe6e12bc5d0be5048edf6a7971df66df294276-init/diff:/var/lib/docker/overlay2/f0e0da57c811333114b7a0181d8121ec20f9baacbcf19d34fad5038b1792b1cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/421a3f5d2b4587fddc513a3766fe6e12bc5d0be5048edf6a7971df66df294276/merged",
	                "UpperDir": "/var/lib/docker/overlay2/421a3f5d2b4587fddc513a3766fe6e12bc5d0be5048edf6a7971df66df294276/diff",
	                "WorkDir": "/var/lib/docker/overlay2/421a3f5d2b4587fddc513a3766fe6e12bc5d0be5048edf6a7971df66df294276/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-329029",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-329029/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-329029",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-329029",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-329029",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ae9b7e51e4a22855850c14157f20fe9952fac93fb00d7eb0e924d732b68c0ec5",
	            "SandboxKey": "/var/run/docker/netns/ae9b7e51e4a2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36816"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36815"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36814"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36813"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-329029": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5e3ca6f84267",
	                        "ingress-addon-legacy-329029"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "6613d4553895b05de4e301b3e57e7affbaae62299e65ac00aa2360ea2ab64069",
	                    "EndpointID": "87489f7149e8dbd068776cbd4ee8018ba277ac98f0a2cc08a5bebb3f225e36f7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-329029",
	                        "5e3ca6f84267"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
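When reading the inspect dump above, the two fields the rest of the post-mortem relies on are the node's address on the ingress-addon-legacy-329029 network (192.168.49.2) and the host ports published on 127.0.0.1. A small Go sketch for pulling just those fields with `docker inspect --format` is below; the container name is taken from the log, and the format strings follow the documented inspect template examples.

// inspect_fields.go: extract the node IP and the published SSH port from the
// container shown in the inspect dump above. Container name from the log;
// format strings follow the documented docker inspect --format examples.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func inspect(container, format string) (string, error) {
	out, err := exec.Command("docker", "inspect", "-f", format, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const container = "ingress-addon-legacy-329029"
	ip, err := inspect(container, `{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}`)
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	sshPort, _ := inspect(container, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	fmt.Printf("node IP: %s, SSH published on 127.0.0.1:%s\n", ip, sshPort)
}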
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-329029 -n ingress-addon-legacy-329029
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-329029 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-329029 logs -n 25: (1.325573278s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-395953                                                   | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1179451438/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-395953                                                   | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1179451438/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-395953 ssh findmnt                                          | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-395953 ssh findmnt                                          | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC | 26 Feb 24 11:55 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-395953 ssh findmnt                                          | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC | 26 Feb 24 11:55 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-395953 ssh findmnt                                          | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC | 26 Feb 24 11:55 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-395953                                                   | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-395953                                                      | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC | 26 Feb 24 11:55 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-395953                                                      | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC | 26 Feb 24 11:55 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-395953                                                      | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC | 26 Feb 24 11:55 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-395953                                                      | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC | 26 Feb 24 11:55 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-395953                                                      | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC | 26 Feb 24 11:55 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-395953                                                      | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC | 26 Feb 24 11:55 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-395953                                                      | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC | 26 Feb 24 11:55 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-395953 ssh pgrep                                            | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-395953 image build -t                                       | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC | 26 Feb 24 11:55 UTC |
	|                | localhost/my-image:functional-395953                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-395953 image ls                                             | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC | 26 Feb 24 11:55 UTC |
	| delete         | -p functional-395953                                                   | functional-395953           | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC | 26 Feb 24 11:55 UTC |
	| start          | -p ingress-addon-legacy-329029                                         | ingress-addon-legacy-329029 | jenkins | v1.32.0 | 26 Feb 24 11:55 UTC | 26 Feb 24 11:57 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-329029                                            | ingress-addon-legacy-329029 | jenkins | v1.32.0 | 26 Feb 24 11:57 UTC | 26 Feb 24 11:57 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-329029                                            | ingress-addon-legacy-329029 | jenkins | v1.32.0 | 26 Feb 24 11:57 UTC | 26 Feb 24 11:57 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-329029                                            | ingress-addon-legacy-329029 | jenkins | v1.32.0 | 26 Feb 24 11:57 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-329029 ip                                         | ingress-addon-legacy-329029 | jenkins | v1.32.0 | 26 Feb 24 11:59 UTC | 26 Feb 24 11:59 UTC |
	| addons         | ingress-addon-legacy-329029                                            | ingress-addon-legacy-329029 | jenkins | v1.32.0 | 26 Feb 24 12:00 UTC | 26 Feb 24 12:00 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-329029                                            | ingress-addon-legacy-329029 | jenkins | v1.32.0 | 26 Feb 24 12:00 UTC | 26 Feb 24 12:00 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 11:55:39
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 11:55:39.557147  642163 out.go:291] Setting OutFile to fd 1 ...
	I0226 11:55:39.557407  642163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:55:39.557439  642163 out.go:304] Setting ErrFile to fd 2...
	I0226 11:55:39.557461  642163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:55:39.557720  642163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
	I0226 11:55:39.558176  642163 out.go:298] Setting JSON to false
	I0226 11:55:39.559079  642163 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":88686,"bootTime":1708859854,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0226 11:55:39.559179  642163 start.go:139] virtualization:  
	I0226 11:55:39.561740  642163 out.go:177] * [ingress-addon-legacy-329029] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0226 11:55:39.564461  642163 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 11:55:39.566055  642163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 11:55:39.564590  642163 notify.go:220] Checking for updates...
	I0226 11:55:39.569549  642163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 11:55:39.571419  642163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	I0226 11:55:39.573168  642163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0226 11:55:39.575030  642163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 11:55:39.577038  642163 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 11:55:39.598416  642163 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0226 11:55:39.598542  642163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:55:39.670700  642163 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:47 SystemTime:2024-02-26 11:55:39.66125935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 11:55:39.670812  642163 docker.go:295] overlay module found
	I0226 11:55:39.674293  642163 out.go:177] * Using the docker driver based on user configuration
	I0226 11:55:39.675992  642163 start.go:299] selected driver: docker
	I0226 11:55:39.676009  642163 start.go:903] validating driver "docker" against <nil>
	I0226 11:55:39.676023  642163 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 11:55:39.676721  642163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:55:39.740764  642163 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:47 SystemTime:2024-02-26 11:55:39.731062361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 11:55:39.740927  642163 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 11:55:39.741156  642163 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0226 11:55:39.743054  642163 out.go:177] * Using Docker driver with root privileges
	I0226 11:55:39.745116  642163 cni.go:84] Creating CNI manager for ""
	I0226 11:55:39.745146  642163 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 11:55:39.745157  642163 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0226 11:55:39.745169  642163 start_flags.go:323] config:
	{Name:ingress-addon-legacy-329029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-329029 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:55:39.747380  642163 out.go:177] * Starting control plane node ingress-addon-legacy-329029 in cluster ingress-addon-legacy-329029
	I0226 11:55:39.749188  642163 cache.go:121] Beginning downloading kic base image for docker with crio
	I0226 11:55:39.751176  642163 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 11:55:39.753051  642163 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0226 11:55:39.753060  642163 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 11:55:39.767645  642163 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 11:55:39.767671  642163 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 11:55:39.815339  642163 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0226 11:55:39.815377  642163 cache.go:56] Caching tarball of preloaded images
	I0226 11:55:39.815554  642163 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0226 11:55:39.817682  642163 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0226 11:55:39.819385  642163 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0226 11:55:39.931718  642163 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0226 11:55:48.086223  642163 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0226 11:55:48.086341  642163 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0226 11:55:49.282268  642163 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0226 11:55:49.282662  642163 profile.go:148] Saving config to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/config.json ...
	I0226 11:55:49.282700  642163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/config.json: {Name:mk5189712c6dda80c2da246fc0b0528318d7a29c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:55:49.282921  642163 cache.go:194] Successfully downloaded all kic artifacts
	I0226 11:55:49.282951  642163 start.go:365] acquiring machines lock for ingress-addon-legacy-329029: {Name:mk989cfd0b6e8b2bfa224cecbb065780fc3cd615 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 11:55:49.283021  642163 start.go:369] acquired machines lock for "ingress-addon-legacy-329029" in 53.405µs
	I0226 11:55:49.283047  642163 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-329029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-329029 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0226 11:55:49.283129  642163 start.go:125] createHost starting for "" (driver="docker")
	I0226 11:55:49.285434  642163 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0226 11:55:49.285719  642163 start.go:159] libmachine.API.Create for "ingress-addon-legacy-329029" (driver="docker")
	I0226 11:55:49.285753  642163 client.go:168] LocalClient.Create starting
	I0226 11:55:49.285826  642163 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem
	I0226 11:55:49.285864  642163 main.go:141] libmachine: Decoding PEM data...
	I0226 11:55:49.285883  642163 main.go:141] libmachine: Parsing certificate...
	I0226 11:55:49.285954  642163 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem
	I0226 11:55:49.285976  642163 main.go:141] libmachine: Decoding PEM data...
	I0226 11:55:49.285991  642163 main.go:141] libmachine: Parsing certificate...
	I0226 11:55:49.286382  642163 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-329029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0226 11:55:49.301846  642163 cli_runner.go:211] docker network inspect ingress-addon-legacy-329029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0226 11:55:49.301938  642163 network_create.go:281] running [docker network inspect ingress-addon-legacy-329029] to gather additional debugging logs...
	I0226 11:55:49.301955  642163 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-329029
	W0226 11:55:49.321564  642163 cli_runner.go:211] docker network inspect ingress-addon-legacy-329029 returned with exit code 1
	I0226 11:55:49.321600  642163 network_create.go:284] error running [docker network inspect ingress-addon-legacy-329029]: docker network inspect ingress-addon-legacy-329029: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-329029 not found
	I0226 11:55:49.321615  642163 network_create.go:286] output of [docker network inspect ingress-addon-legacy-329029]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-329029 not found
	
	** /stderr **
	I0226 11:55:49.321727  642163 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 11:55:49.337252  642163 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40000f7950}
	I0226 11:55:49.337291  642163 network_create.go:124] attempt to create docker network ingress-addon-legacy-329029 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0226 11:55:49.337349  642163 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-329029 ingress-addon-legacy-329029
	I0226 11:55:49.397134  642163 network_create.go:108] docker network ingress-addon-legacy-329029 192.168.49.0/24 created
	I0226 11:55:49.397173  642163 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-329029" container
	I0226 11:55:49.397251  642163 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0226 11:55:49.412177  642163 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-329029 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-329029 --label created_by.minikube.sigs.k8s.io=true
	I0226 11:55:49.430934  642163 oci.go:103] Successfully created a docker volume ingress-addon-legacy-329029
	I0226 11:55:49.431020  642163 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-329029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-329029 --entrypoint /usr/bin/test -v ingress-addon-legacy-329029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0226 11:55:50.932846  642163 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-329029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-329029 --entrypoint /usr/bin/test -v ingress-addon-legacy-329029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib: (1.501773989s)
	I0226 11:55:50.932879  642163 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-329029
	I0226 11:55:50.932898  642163 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0226 11:55:50.932917  642163 kic.go:194] Starting extracting preloaded images to volume ...
	I0226 11:55:50.933006  642163 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-329029:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0226 11:55:55.912802  642163 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-329029:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (4.979758799s)
	I0226 11:55:55.912839  642163 kic.go:203] duration metric: took 4.979918 seconds to extract preloaded images to volume
	W0226 11:55:55.912972  642163 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0226 11:55:55.913092  642163 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0226 11:55:55.974463  642163 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-329029 --name ingress-addon-legacy-329029 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-329029 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-329029 --network ingress-addon-legacy-329029 --ip 192.168.49.2 --volume ingress-addon-legacy-329029:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0226 11:55:56.311680  642163 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-329029 --format={{.State.Running}}
	I0226 11:55:56.334934  642163 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-329029 --format={{.State.Status}}
	I0226 11:55:56.357375  642163 cli_runner.go:164] Run: docker exec ingress-addon-legacy-329029 stat /var/lib/dpkg/alternatives/iptables
	I0226 11:55:56.419742  642163 oci.go:144] the created container "ingress-addon-legacy-329029" has a running status.
	I0226 11:55:56.419779  642163 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/ingress-addon-legacy-329029/id_rsa...
	I0226 11:55:56.726080  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/ingress-addon-legacy-329029/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0226 11:55:56.726132  642163 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18222-608626/.minikube/machines/ingress-addon-legacy-329029/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0226 11:55:56.768991  642163 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-329029 --format={{.State.Status}}
	I0226 11:55:56.816958  642163 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0226 11:55:56.816978  642163 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-329029 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0226 11:55:56.890939  642163 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-329029 --format={{.State.Status}}
	I0226 11:55:56.921877  642163 machine.go:88] provisioning docker machine ...
	I0226 11:55:56.921914  642163 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-329029"
	I0226 11:55:56.921985  642163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-329029
	I0226 11:55:56.954323  642163 main.go:141] libmachine: Using SSH client type: native
	I0226 11:55:56.954592  642163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 36816 <nil> <nil>}
	I0226 11:55:56.954610  642163 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-329029 && echo "ingress-addon-legacy-329029" | sudo tee /etc/hostname
	I0226 11:55:56.955248  642163 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57960->127.0.0.1:36816: read: connection reset by peer
	I0226 11:56:00.240455  642163 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-329029
	
	I0226 11:56:00.240570  642163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-329029
	I0226 11:56:00.280823  642163 main.go:141] libmachine: Using SSH client type: native
	I0226 11:56:00.281098  642163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 36816 <nil> <nil>}
	I0226 11:56:00.281118  642163 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-329029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-329029/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-329029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 11:56:00.510358  642163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 11:56:00.510388  642163 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18222-608626/.minikube CaCertPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18222-608626/.minikube}
	I0226 11:56:00.510414  642163 ubuntu.go:177] setting up certificates
	I0226 11:56:00.510428  642163 provision.go:83] configureAuth start
	I0226 11:56:00.510518  642163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-329029
	I0226 11:56:00.530641  642163 provision.go:138] copyHostCerts
	I0226 11:56:00.530691  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem
	I0226 11:56:00.530729  642163 exec_runner.go:144] found /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem, removing ...
	I0226 11:56:00.530741  642163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem
	I0226 11:56:00.530840  642163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem (1679 bytes)
	I0226 11:56:00.530932  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem
	I0226 11:56:00.530957  642163 exec_runner.go:144] found /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem, removing ...
	I0226 11:56:00.530962  642163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem
	I0226 11:56:00.530994  642163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem (1082 bytes)
	I0226 11:56:00.531047  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem
	I0226 11:56:00.531069  642163 exec_runner.go:144] found /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem, removing ...
	I0226 11:56:00.531076  642163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem
	I0226 11:56:00.531106  642163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem (1123 bytes)
	I0226 11:56:00.531162  642163 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-329029 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-329029]
	I0226 11:56:00.668567  642163 provision.go:172] copyRemoteCerts
	I0226 11:56:00.668643  642163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 11:56:00.668704  642163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-329029
	I0226 11:56:00.685093  642163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36816 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/ingress-addon-legacy-329029/id_rsa Username:docker}
	I0226 11:56:00.785538  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0226 11:56:00.785601  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0226 11:56:00.812382  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0226 11:56:00.812445  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0226 11:56:00.836451  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0226 11:56:00.836513  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0226 11:56:00.860045  642163 provision.go:86] duration metric: configureAuth took 349.601847ms
	I0226 11:56:00.860115  642163 ubuntu.go:193] setting minikube options for container-runtime
	I0226 11:56:00.860328  642163 config.go:182] Loaded profile config "ingress-addon-legacy-329029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0226 11:56:00.860455  642163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-329029
	I0226 11:56:00.876006  642163 main.go:141] libmachine: Using SSH client type: native
	I0226 11:56:00.876258  642163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 36816 <nil> <nil>}
	I0226 11:56:00.876277  642163 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0226 11:56:01.162270  642163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0226 11:56:01.162300  642163 machine.go:91] provisioned docker machine in 4.240399135s
	I0226 11:56:01.162311  642163 client.go:171] LocalClient.Create took 11.876549412s
	I0226 11:56:01.162339  642163 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-329029" took 11.87661931s
	I0226 11:56:01.162352  642163 start.go:300] post-start starting for "ingress-addon-legacy-329029" (driver="docker")
	I0226 11:56:01.162365  642163 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 11:56:01.162441  642163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 11:56:01.162497  642163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-329029
	I0226 11:56:01.183442  642163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36816 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/ingress-addon-legacy-329029/id_rsa Username:docker}
	I0226 11:56:01.287203  642163 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 11:56:01.290923  642163 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 11:56:01.290965  642163 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 11:56:01.290977  642163 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 11:56:01.290986  642163 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 11:56:01.290997  642163 filesync.go:126] Scanning /home/jenkins/minikube-integration/18222-608626/.minikube/addons for local assets ...
	I0226 11:56:01.291063  642163 filesync.go:126] Scanning /home/jenkins/minikube-integration/18222-608626/.minikube/files for local assets ...
	I0226 11:56:01.291170  642163 filesync.go:149] local asset: /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem -> 6139882.pem in /etc/ssl/certs
	I0226 11:56:01.291182  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem -> /etc/ssl/certs/6139882.pem
	I0226 11:56:01.291321  642163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 11:56:01.301239  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem --> /etc/ssl/certs/6139882.pem (1708 bytes)
	I0226 11:56:01.329153  642163 start.go:303] post-start completed in 166.784885ms
	I0226 11:56:01.329622  642163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-329029
	I0226 11:56:01.347997  642163 profile.go:148] Saving config to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/config.json ...
	I0226 11:56:01.348301  642163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 11:56:01.348361  642163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-329029
	I0226 11:56:01.365596  642163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36816 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/ingress-addon-legacy-329029/id_rsa Username:docker}
	I0226 11:56:01.462045  642163 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 11:56:01.466910  642163 start.go:128] duration metric: createHost completed in 12.183762291s
	I0226 11:56:01.466937  642163 start.go:83] releasing machines lock for "ingress-addon-legacy-329029", held for 12.183903276s
	I0226 11:56:01.467026  642163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-329029
	I0226 11:56:01.483616  642163 ssh_runner.go:195] Run: cat /version.json
	I0226 11:56:01.483671  642163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-329029
	I0226 11:56:01.483944  642163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 11:56:01.484013  642163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-329029
	I0226 11:56:01.504977  642163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36816 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/ingress-addon-legacy-329029/id_rsa Username:docker}
	I0226 11:56:01.513307  642163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36816 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/ingress-addon-legacy-329029/id_rsa Username:docker}
	I0226 11:56:01.736905  642163 ssh_runner.go:195] Run: systemctl --version
	I0226 11:56:01.742138  642163 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0226 11:56:01.886136  642163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 11:56:01.890600  642163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0226 11:56:01.912820  642163 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0226 11:56:01.912948  642163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0226 11:56:01.954656  642163 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0226 11:56:01.954730  642163 start.go:475] detecting cgroup driver to use...
	I0226 11:56:01.954777  642163 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0226 11:56:01.954869  642163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0226 11:56:01.973314  642163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0226 11:56:01.986032  642163 docker.go:217] disabling cri-docker service (if available) ...
	I0226 11:56:01.986153  642163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0226 11:56:02.001988  642163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0226 11:56:02.019726  642163 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0226 11:56:02.113417  642163 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0226 11:56:02.204093  642163 docker.go:233] disabling docker service ...
	I0226 11:56:02.204176  642163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0226 11:56:02.227261  642163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0226 11:56:02.239545  642163 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0226 11:56:02.323357  642163 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0226 11:56:02.421083  642163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0226 11:56:02.432428  642163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 11:56:02.448980  642163 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0226 11:56:02.449088  642163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 11:56:02.458594  642163 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0226 11:56:02.458684  642163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 11:56:02.468566  642163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 11:56:02.478121  642163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 11:56:02.487494  642163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 11:56:02.496957  642163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 11:56:02.505335  642163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 11:56:02.513931  642163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 11:56:02.594160  642163 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0226 11:56:02.703258  642163 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0226 11:56:02.703382  642163 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0226 11:56:02.707086  642163 start.go:543] Will wait 60s for crictl version
	I0226 11:56:02.707190  642163 ssh_runner.go:195] Run: which crictl
	I0226 11:56:02.710498  642163 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0226 11:56:02.751976  642163 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0226 11:56:02.752125  642163 ssh_runner.go:195] Run: crio --version
	I0226 11:56:02.790583  642163 ssh_runner.go:195] Run: crio --version
	I0226 11:56:02.833076  642163 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0226 11:56:02.835362  642163 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-329029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 11:56:02.850251  642163 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0226 11:56:02.854024  642163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 11:56:02.865013  642163 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0226 11:56:02.865087  642163 ssh_runner.go:195] Run: sudo crictl images --output json
	I0226 11:56:02.914768  642163 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0226 11:56:02.914850  642163 ssh_runner.go:195] Run: which lz4
	I0226 11:56:02.918085  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0226 11:56:02.918186  642163 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0226 11:56:02.921292  642163 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0226 11:56:02.921324  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0226 11:56:05.059290  642163 crio.go:444] Took 2.141140 seconds to copy over tarball
	I0226 11:56:05.059410  642163 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0226 11:56:07.741566  642163 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.682104178s)
	I0226 11:56:07.741636  642163 crio.go:451] Took 2.682277 seconds to extract the tarball
	I0226 11:56:07.741654  642163 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0226 11:56:07.826052  642163 ssh_runner.go:195] Run: sudo crictl images --output json
	I0226 11:56:07.862963  642163 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0226 11:56:07.862988  642163 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0226 11:56:07.863032  642163 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:56:07.863083  642163 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0226 11:56:07.863253  642163 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0226 11:56:07.863308  642163 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0226 11:56:07.863347  642163 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0226 11:56:07.863471  642163 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0226 11:56:07.863566  642163 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0226 11:56:07.863476  642163 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0226 11:56:07.866792  642163 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0226 11:56:07.867195  642163 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0226 11:56:07.867373  642163 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0226 11:56:07.867501  642163 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0226 11:56:07.867611  642163 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:56:07.867829  642163 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0226 11:56:07.868109  642163 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0226 11:56:07.868536  642163 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	W0226 11:56:08.175220  642163 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0226 11:56:08.175430  642163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0226 11:56:08.214311  642163 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0226 11:56:08.214584  642163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W0226 11:56:08.217729  642163 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0226 11:56:08.217995  642163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0226 11:56:08.221126  642163 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0226 11:56:08.221368  642163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0226 11:56:08.235189  642163 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0226 11:56:08.235286  642163 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0226 11:56:08.235363  642163 ssh_runner.go:195] Run: which crictl
	W0226 11:56:08.242512  642163 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0226 11:56:08.242817  642163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0226 11:56:08.245629  642163 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0226 11:56:08.245797  642163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0226 11:56:08.247442  642163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0226 11:56:08.331131  642163 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0226 11:56:08.331281  642163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:56:08.335508  642163 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0226 11:56:08.335550  642163 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0226 11:56:08.335604  642163 ssh_runner.go:195] Run: which crictl
	I0226 11:56:08.380104  642163 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0226 11:56:08.380148  642163 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0226 11:56:08.380196  642163 ssh_runner.go:195] Run: which crictl
	I0226 11:56:08.380282  642163 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0226 11:56:08.380301  642163 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0226 11:56:08.380327  642163 ssh_runner.go:195] Run: which crictl
	I0226 11:56:08.380389  642163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0226 11:56:08.405306  642163 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0226 11:56:08.405350  642163 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0226 11:56:08.405400  642163 ssh_runner.go:195] Run: which crictl
	I0226 11:56:08.405483  642163 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0226 11:56:08.405501  642163 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0226 11:56:08.405526  642163 ssh_runner.go:195] Run: which crictl
	I0226 11:56:08.413553  642163 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0226 11:56:08.413597  642163 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0226 11:56:08.413641  642163 ssh_runner.go:195] Run: which crictl
	I0226 11:56:08.560710  642163 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0226 11:56:08.560767  642163 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:56:08.560820  642163 ssh_runner.go:195] Run: which crictl
	I0226 11:56:08.560933  642163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0226 11:56:08.561021  642163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18222-608626/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0226 11:56:08.561067  642163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0226 11:56:08.561144  642163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0226 11:56:08.561212  642163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0226 11:56:08.561263  642163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0226 11:56:08.561319  642163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0226 11:56:08.711497  642163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18222-608626/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0226 11:56:08.711575  642163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18222-608626/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0226 11:56:08.711631  642163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18222-608626/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0226 11:56:08.711675  642163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18222-608626/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0226 11:56:08.711716  642163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18222-608626/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0226 11:56:08.711777  642163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:56:08.711859  642163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18222-608626/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0226 11:56:08.773079  642163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18222-608626/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0226 11:56:08.773152  642163 cache_images.go:92] LoadImages completed in 910.1499ms
	W0226 11:56:08.773218  642163 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18222-608626/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I0226 11:56:08.773286  642163 ssh_runner.go:195] Run: crio config
	I0226 11:56:08.820883  642163 cni.go:84] Creating CNI manager for ""
	I0226 11:56:08.820954  642163 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 11:56:08.820987  642163 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 11:56:08.821037  642163 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-329029 NodeName:ingress-addon-legacy-329029 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0226 11:56:08.821231  642163 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-329029"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 11:56:08.821374  642163 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-329029 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-329029 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0226 11:56:08.821475  642163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0226 11:56:08.830271  642163 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 11:56:08.830345  642163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 11:56:08.838670  642163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0226 11:56:08.855837  642163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0226 11:56:08.874074  642163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0226 11:56:08.892497  642163 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0226 11:56:08.896152  642163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 11:56:08.906761  642163 certs.go:56] Setting up /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029 for IP: 192.168.49.2
	I0226 11:56:08.906857  642163 certs.go:190] acquiring lock for shared ca certs: {Name:mkc71f6ba94614715b3b8dc8b06b5f59e5f1adfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:56:08.907013  642163 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.key
	I0226 11:56:08.907071  642163 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.key
	I0226 11:56:08.907124  642163 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.key
	I0226 11:56:08.907140  642163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt with IP's: []
	I0226 11:56:09.589035  642163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt ...
	I0226 11:56:09.589068  642163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: {Name:mkbc4cdaab91102047920e5a58720328a3eb9dc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:56:09.589271  642163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.key ...
	I0226 11:56:09.589287  642163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.key: {Name:mk5ee725cc630060c691ffc72e9ecbe837b18d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:56:09.589381  642163 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/apiserver.key.dd3b5fb2
	I0226 11:56:09.589405  642163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0226 11:56:10.108366  642163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/apiserver.crt.dd3b5fb2 ...
	I0226 11:56:10.108401  642163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/apiserver.crt.dd3b5fb2: {Name:mk0a0eda03cd8a3ca4711d9d7c818659e09350df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:56:10.108616  642163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/apiserver.key.dd3b5fb2 ...
	I0226 11:56:10.108632  642163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/apiserver.key.dd3b5fb2: {Name:mk1233b9bb0a1b0b653fa541ff2f8391704d6311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:56:10.108742  642163 certs.go:337] copying /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/apiserver.crt
	I0226 11:56:10.108839  642163 certs.go:341] copying /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/apiserver.key
	I0226 11:56:10.108909  642163 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/proxy-client.key
	I0226 11:56:10.108927  642163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/proxy-client.crt with IP's: []
	I0226 11:56:10.395779  642163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/proxy-client.crt ...
	I0226 11:56:10.395812  642163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/proxy-client.crt: {Name:mkca6c80c44c9496ce9ec3265a2698287d86b183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:56:10.396009  642163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/proxy-client.key ...
	I0226 11:56:10.396025  642163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/proxy-client.key: {Name:mkd0f83323117fb79a06d27889c90c2c3c33d8fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:56:10.396114  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0226 11:56:10.396179  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0226 11:56:10.396197  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0226 11:56:10.396209  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0226 11:56:10.396225  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0226 11:56:10.396241  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0226 11:56:10.396256  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0226 11:56:10.396269  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0226 11:56:10.396324  642163 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/613988.pem (1338 bytes)
	W0226 11:56:10.396364  642163 certs.go:433] ignoring /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/613988_empty.pem, impossibly tiny 0 bytes
	I0226 11:56:10.396424  642163 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca-key.pem (1675 bytes)
	I0226 11:56:10.396455  642163 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem (1082 bytes)
	I0226 11:56:10.396482  642163 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem (1123 bytes)
	I0226 11:56:10.396511  642163 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem (1679 bytes)
	I0226 11:56:10.396563  642163 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem (1708 bytes)
	I0226 11:56:10.396598  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:56:10.396613  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/613988.pem -> /usr/share/ca-certificates/613988.pem
	I0226 11:56:10.396622  642163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem -> /usr/share/ca-certificates/6139882.pem
	I0226 11:56:10.397279  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 11:56:10.422690  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0226 11:56:10.447224  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 11:56:10.472111  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0226 11:56:10.496333  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 11:56:10.520599  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0226 11:56:10.545767  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 11:56:10.569603  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0226 11:56:10.594388  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 11:56:10.618821  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/certs/613988.pem --> /usr/share/ca-certificates/613988.pem (1338 bytes)
	I0226 11:56:10.642914  642163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem --> /usr/share/ca-certificates/6139882.pem (1708 bytes)
	I0226 11:56:10.667530  642163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 11:56:10.685922  642163 ssh_runner.go:195] Run: openssl version
	I0226 11:56:10.691587  642163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/613988.pem && ln -fs /usr/share/ca-certificates/613988.pem /etc/ssl/certs/613988.pem"
	I0226 11:56:10.700953  642163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/613988.pem
	I0226 11:56:10.704468  642163 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 11:52 /usr/share/ca-certificates/613988.pem
	I0226 11:56:10.704532  642163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/613988.pem
	I0226 11:56:10.711382  642163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/613988.pem /etc/ssl/certs/51391683.0"
	I0226 11:56:10.720809  642163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6139882.pem && ln -fs /usr/share/ca-certificates/6139882.pem /etc/ssl/certs/6139882.pem"
	I0226 11:56:10.730183  642163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6139882.pem
	I0226 11:56:10.733816  642163 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 11:52 /usr/share/ca-certificates/6139882.pem
	I0226 11:56:10.733877  642163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6139882.pem
	I0226 11:56:10.740956  642163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6139882.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 11:56:10.750362  642163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 11:56:10.759765  642163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:56:10.763086  642163 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 11:45 /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:56:10.763147  642163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 11:56:10.770277  642163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 11:56:10.780214  642163 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 11:56:10.783630  642163 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0226 11:56:10.783724  642163 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-329029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-329029 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:56:10.783808  642163 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0226 11:56:10.783869  642163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0226 11:56:10.821306  642163 cri.go:89] found id: ""
	I0226 11:56:10.821386  642163 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 11:56:10.830295  642163 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 11:56:10.839096  642163 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 11:56:10.839200  642163 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 11:56:10.848102  642163 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 11:56:10.848171  642163 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 11:56:10.901891  642163 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0226 11:56:10.902160  642163 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 11:56:10.948974  642163 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0226 11:56:10.949052  642163 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0226 11:56:10.949113  642163 kubeadm.go:322] OS: Linux
	I0226 11:56:10.949162  642163 kubeadm.go:322] CGROUPS_CPU: enabled
	I0226 11:56:10.949214  642163 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0226 11:56:10.949263  642163 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0226 11:56:10.949313  642163 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0226 11:56:10.949364  642163 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0226 11:56:10.949414  642163 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0226 11:56:11.036692  642163 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 11:56:11.036846  642163 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 11:56:11.036966  642163 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 11:56:11.252407  642163 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 11:56:11.254100  642163 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 11:56:11.254193  642163 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0226 11:56:11.352007  642163 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 11:56:11.356562  642163 out.go:204]   - Generating certificates and keys ...
	I0226 11:56:11.356827  642163 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 11:56:11.356969  642163 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 11:56:12.062890  642163 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0226 11:56:12.598852  642163 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0226 11:56:13.411036  642163 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0226 11:56:13.725642  642163 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0226 11:56:13.993926  642163 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0226 11:56:13.994297  642163 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-329029 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0226 11:56:14.374750  642163 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0226 11:56:14.375116  642163 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-329029 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0226 11:56:14.648955  642163 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0226 11:56:15.096704  642163 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0226 11:56:15.878209  642163 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0226 11:56:15.878638  642163 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 11:56:16.816848  642163 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 11:56:17.031609  642163 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 11:56:17.413236  642163 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 11:56:17.640899  642163 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 11:56:17.641733  642163 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 11:56:17.644062  642163 out.go:204]   - Booting up control plane ...
	I0226 11:56:17.644180  642163 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 11:56:17.651690  642163 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 11:56:17.658946  642163 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 11:56:17.660550  642163 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 11:56:17.663745  642163 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 11:56:30.673424  642163 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.005626 seconds
	I0226 11:56:30.673550  642163 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0226 11:56:30.687369  642163 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0226 11:56:31.210407  642163 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0226 11:56:31.210560  642163 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-329029 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0226 11:56:31.718792  642163 kubeadm.go:322] [bootstrap-token] Using token: 99gngt.groceexz7av2npd6
	I0226 11:56:31.720914  642163 out.go:204]   - Configuring RBAC rules ...
	I0226 11:56:31.721044  642163 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0226 11:56:31.725690  642163 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0226 11:56:31.733649  642163 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0226 11:56:31.747799  642163 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0226 11:56:31.750920  642163 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0226 11:56:31.753752  642163 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0226 11:56:31.762604  642163 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0226 11:56:32.010718  642163 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0226 11:56:32.143237  642163 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0226 11:56:32.144524  642163 kubeadm.go:322] 
	I0226 11:56:32.144593  642163 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0226 11:56:32.144598  642163 kubeadm.go:322] 
	I0226 11:56:32.144692  642163 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0226 11:56:32.144698  642163 kubeadm.go:322] 
	I0226 11:56:32.144722  642163 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0226 11:56:32.144802  642163 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0226 11:56:32.144859  642163 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0226 11:56:32.144868  642163 kubeadm.go:322] 
	I0226 11:56:32.144919  642163 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0226 11:56:32.145018  642163 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0226 11:56:32.145115  642163 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0226 11:56:32.145141  642163 kubeadm.go:322] 
	I0226 11:56:32.145233  642163 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0226 11:56:32.145332  642163 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0226 11:56:32.145341  642163 kubeadm.go:322] 
	I0226 11:56:32.145428  642163 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 99gngt.groceexz7av2npd6 \
	I0226 11:56:32.145552  642163 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4951039124412052416f64387a7476aba3429f2071dfaa9a882b475b36ccdccb \
	I0226 11:56:32.145583  642163 kubeadm.go:322]     --control-plane 
	I0226 11:56:32.145588  642163 kubeadm.go:322] 
	I0226 11:56:32.145679  642163 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0226 11:56:32.145684  642163 kubeadm.go:322] 
	I0226 11:56:32.145773  642163 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 99gngt.groceexz7av2npd6 \
	I0226 11:56:32.145884  642163 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4951039124412052416f64387a7476aba3429f2071dfaa9a882b475b36ccdccb 
	I0226 11:56:32.148516  642163 kubeadm.go:322] W0226 11:56:10.901045    1243 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0226 11:56:32.148774  642163 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0226 11:56:32.148906  642163 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0226 11:56:32.149110  642163 kubeadm.go:322] W0226 11:56:17.656771    1243 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0226 11:56:32.149257  642163 kubeadm.go:322] W0226 11:56:17.659059    1243 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0226 11:56:32.149277  642163 cni.go:84] Creating CNI manager for ""
	I0226 11:56:32.149285  642163 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 11:56:32.151465  642163 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0226 11:56:32.153447  642163 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0226 11:56:32.157306  642163 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0226 11:56:32.157323  642163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0226 11:56:32.175844  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0226 11:56:32.603537  642163 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0226 11:56:32.603664  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:32.603737  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4011915ad0e9b27ff42994854397cc2ed93516c6 minikube.k8s.io/name=ingress-addon-legacy-329029 minikube.k8s.io/updated_at=2024_02_26T11_56_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:32.738551  642163 ops.go:34] apiserver oom_adj: -16
	I0226 11:56:32.738668  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:33.239716  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:33.739050  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:34.239611  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:34.739799  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:35.239627  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:35.739364  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:36.239475  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:36.739599  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:37.238830  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:37.738827  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:38.239550  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:38.738753  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:39.238850  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:39.738893  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:40.239561  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:40.739296  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:41.239531  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:41.738904  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:42.239295  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:42.739551  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:43.239260  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:43.738898  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:44.239275  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:44.739373  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:45.238930  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:45.739281  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:46.239233  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:46.739218  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:47.238859  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:47.738844  642163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 11:56:47.840474  642163 kubeadm.go:1088] duration metric: took 15.236857026s to wait for elevateKubeSystemPrivileges.
	I0226 11:56:47.840503  642163 kubeadm.go:406] StartCluster complete in 37.056783598s
	I0226 11:56:47.840521  642163 settings.go:142] acquiring lock: {Name:mk1588246e1eeb31f86f63cf3c470d51f6fe64da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:56:47.840592  642163 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 11:56:47.841321  642163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/kubeconfig: {Name:mk0efe1f972316757632066327a27c71356b5734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 11:56:47.842209  642163 kapi.go:59] client config for ingress-addon-legacy-329029: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt", KeyFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.key", CAFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1702d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0226 11:56:47.843124  642163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0226 11:56:47.843371  642163 config.go:182] Loaded profile config "ingress-addon-legacy-329029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0226 11:56:47.843402  642163 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0226 11:56:47.843461  642163 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-329029"
	I0226 11:56:47.843475  642163 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-329029"
	I0226 11:56:47.843516  642163 host.go:66] Checking if "ingress-addon-legacy-329029" exists ...
	I0226 11:56:47.843983  642163 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-329029 --format={{.State.Status}}
	I0226 11:56:47.844871  642163 cert_rotation.go:137] Starting client certificate rotation controller
	I0226 11:56:47.844958  642163 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-329029"
	I0226 11:56:47.844977  642163 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-329029"
	I0226 11:56:47.845413  642163 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-329029 --format={{.State.Status}}
	I0226 11:56:47.879752  642163 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 11:56:47.883275  642163 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 11:56:47.883300  642163 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0226 11:56:47.883383  642163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-329029
	I0226 11:56:47.908845  642163 kapi.go:59] client config for ingress-addon-legacy-329029: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt", KeyFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.key", CAFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1702d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0226 11:56:47.909172  642163 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-329029"
	I0226 11:56:47.909383  642163 host.go:66] Checking if "ingress-addon-legacy-329029" exists ...
	I0226 11:56:47.910556  642163 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-329029 --format={{.State.Status}}
	I0226 11:56:47.941965  642163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36816 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/ingress-addon-legacy-329029/id_rsa Username:docker}
	I0226 11:56:47.952991  642163 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0226 11:56:47.953014  642163 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0226 11:56:47.953087  642163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-329029
	I0226 11:56:47.980248  642163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36816 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/ingress-addon-legacy-329029/id_rsa Username:docker}
	I0226 11:56:48.076326  642163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0226 11:56:48.127606  642163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 11:56:48.183761  642163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0226 11:56:48.484617  642163 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-329029" context rescaled to 1 replicas
	I0226 11:56:48.484682  642163 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0226 11:56:48.486865  642163 out.go:177] * Verifying Kubernetes components...
	I0226 11:56:48.489557  642163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 11:56:48.614503  642163 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0226 11:56:48.776855  642163 kapi.go:59] client config for ingress-addon-legacy-329029: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt", KeyFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.key", CAFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1702d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0226 11:56:48.777110  642163 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-329029" to be "Ready" ...
	I0226 11:56:48.789110  642163 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0226 11:56:48.791273  642163 addons.go:505] enable addons completed in 947.854599ms: enabled=[storage-provisioner default-storageclass]
	I0226 11:56:50.782506  642163 node_ready.go:58] node "ingress-addon-legacy-329029" has status "Ready":"False"
	I0226 11:56:53.280903  642163 node_ready.go:58] node "ingress-addon-legacy-329029" has status "Ready":"False"
	I0226 11:56:55.281254  642163 node_ready.go:58] node "ingress-addon-legacy-329029" has status "Ready":"False"
	I0226 11:56:55.781029  642163 node_ready.go:49] node "ingress-addon-legacy-329029" has status "Ready":"True"
	I0226 11:56:55.781057  642163 node_ready.go:38] duration metric: took 7.003925779s waiting for node "ingress-addon-legacy-329029" to be "Ready" ...
	I0226 11:56:55.781069  642163 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 11:56:55.793345  642163 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-vsnbs" in "kube-system" namespace to be "Ready" ...
	I0226 11:56:57.798026  642163 pod_ready.go:102] pod "coredns-66bff467f8-vsnbs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-26 11:56:47 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0226 11:57:00.306604  642163 pod_ready.go:102] pod "coredns-66bff467f8-vsnbs" in "kube-system" namespace has status "Ready":"False"
	I0226 11:57:02.799347  642163 pod_ready.go:102] pod "coredns-66bff467f8-vsnbs" in "kube-system" namespace has status "Ready":"False"
	I0226 11:57:03.798598  642163 pod_ready.go:92] pod "coredns-66bff467f8-vsnbs" in "kube-system" namespace has status "Ready":"True"
	I0226 11:57:03.798621  642163 pod_ready.go:81] duration metric: took 8.005243704s waiting for pod "coredns-66bff467f8-vsnbs" in "kube-system" namespace to be "Ready" ...
	I0226 11:57:03.798633  642163 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-329029" in "kube-system" namespace to be "Ready" ...
	I0226 11:57:03.803195  642163 pod_ready.go:92] pod "etcd-ingress-addon-legacy-329029" in "kube-system" namespace has status "Ready":"True"
	I0226 11:57:03.803223  642163 pod_ready.go:81] duration metric: took 4.58224ms waiting for pod "etcd-ingress-addon-legacy-329029" in "kube-system" namespace to be "Ready" ...
	I0226 11:57:03.803239  642163 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-329029" in "kube-system" namespace to be "Ready" ...
	I0226 11:57:03.808041  642163 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-329029" in "kube-system" namespace has status "Ready":"True"
	I0226 11:57:03.808070  642163 pod_ready.go:81] duration metric: took 4.821257ms waiting for pod "kube-apiserver-ingress-addon-legacy-329029" in "kube-system" namespace to be "Ready" ...
	I0226 11:57:03.808082  642163 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-329029" in "kube-system" namespace to be "Ready" ...
	I0226 11:57:03.813223  642163 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-329029" in "kube-system" namespace has status "Ready":"True"
	I0226 11:57:03.813253  642163 pod_ready.go:81] duration metric: took 5.141108ms waiting for pod "kube-controller-manager-ingress-addon-legacy-329029" in "kube-system" namespace to be "Ready" ...
	I0226 11:57:03.813266  642163 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ccbz2" in "kube-system" namespace to be "Ready" ...
	I0226 11:57:03.820773  642163 pod_ready.go:92] pod "kube-proxy-ccbz2" in "kube-system" namespace has status "Ready":"True"
	I0226 11:57:03.820848  642163 pod_ready.go:81] duration metric: took 7.573688ms waiting for pod "kube-proxy-ccbz2" in "kube-system" namespace to be "Ready" ...
	I0226 11:57:03.820875  642163 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-329029" in "kube-system" namespace to be "Ready" ...
	I0226 11:57:03.994315  642163 request.go:629] Waited for 173.315051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-329029
	I0226 11:57:04.194672  642163 request.go:629] Waited for 197.330111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-329029
	I0226 11:57:04.197446  642163 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-329029" in "kube-system" namespace has status "Ready":"True"
	I0226 11:57:04.197477  642163 pod_ready.go:81] duration metric: took 376.578978ms waiting for pod "kube-scheduler-ingress-addon-legacy-329029" in "kube-system" namespace to be "Ready" ...
	I0226 11:57:04.197491  642163 pod_ready.go:38] duration metric: took 8.416404759s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 11:57:04.197506  642163 api_server.go:52] waiting for apiserver process to appear ...
	I0226 11:57:04.197578  642163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 11:57:04.209030  642163 api_server.go:72] duration metric: took 15.724312056s to wait for apiserver process to appear ...
	I0226 11:57:04.209057  642163 api_server.go:88] waiting for apiserver healthz status ...
	I0226 11:57:04.209078  642163 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0226 11:57:04.217657  642163 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0226 11:57:04.218560  642163 api_server.go:141] control plane version: v1.18.20
	I0226 11:57:04.218582  642163 api_server.go:131] duration metric: took 9.51798ms to wait for apiserver health ...
	I0226 11:57:04.218591  642163 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 11:57:04.394031  642163 request.go:629] Waited for 175.325481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0226 11:57:04.400042  642163 system_pods.go:59] 8 kube-system pods found
	I0226 11:57:04.400075  642163 system_pods.go:61] "coredns-66bff467f8-vsnbs" [3bb8aa21-b07e-4147-b939-270b3efb5f87] Running
	I0226 11:57:04.400081  642163 system_pods.go:61] "etcd-ingress-addon-legacy-329029" [30693739-8c9a-4762-bfc0-3ae40764311b] Running
	I0226 11:57:04.400086  642163 system_pods.go:61] "kindnet-fjxth" [1a0a5dd2-7e22-455c-9636-ab03c301de76] Running
	I0226 11:57:04.400090  642163 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-329029" [b8a74db7-121b-4b0b-ad2b-81ef82aec7d3] Running
	I0226 11:57:04.400094  642163 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-329029" [eb36c528-83e5-4e6e-a478-c3cffbdb0c7d] Running
	I0226 11:57:04.400098  642163 system_pods.go:61] "kube-proxy-ccbz2" [34c8b32d-f6cf-447f-be9f-541724d302ff] Running
	I0226 11:57:04.400101  642163 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-329029" [61c56264-3df6-4046-8b90-8a8faea3da55] Running
	I0226 11:57:04.400105  642163 system_pods.go:61] "storage-provisioner" [4a5f8853-d1bf-43f7-841b-0fd3f295f4aa] Running
	I0226 11:57:04.400110  642163 system_pods.go:74] duration metric: took 181.514904ms to wait for pod list to return data ...
	I0226 11:57:04.400124  642163 default_sa.go:34] waiting for default service account to be created ...
	I0226 11:57:04.594027  642163 request.go:629] Waited for 193.820908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0226 11:57:04.596485  642163 default_sa.go:45] found service account: "default"
	I0226 11:57:04.596516  642163 default_sa.go:55] duration metric: took 196.386268ms for default service account to be created ...
	I0226 11:57:04.596527  642163 system_pods.go:116] waiting for k8s-apps to be running ...
	I0226 11:57:04.793965  642163 request.go:629] Waited for 197.349195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0226 11:57:04.799839  642163 system_pods.go:86] 8 kube-system pods found
	I0226 11:57:04.799873  642163 system_pods.go:89] "coredns-66bff467f8-vsnbs" [3bb8aa21-b07e-4147-b939-270b3efb5f87] Running
	I0226 11:57:04.799880  642163 system_pods.go:89] "etcd-ingress-addon-legacy-329029" [30693739-8c9a-4762-bfc0-3ae40764311b] Running
	I0226 11:57:04.799885  642163 system_pods.go:89] "kindnet-fjxth" [1a0a5dd2-7e22-455c-9636-ab03c301de76] Running
	I0226 11:57:04.799890  642163 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-329029" [b8a74db7-121b-4b0b-ad2b-81ef82aec7d3] Running
	I0226 11:57:04.799894  642163 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-329029" [eb36c528-83e5-4e6e-a478-c3cffbdb0c7d] Running
	I0226 11:57:04.799898  642163 system_pods.go:89] "kube-proxy-ccbz2" [34c8b32d-f6cf-447f-be9f-541724d302ff] Running
	I0226 11:57:04.799902  642163 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-329029" [61c56264-3df6-4046-8b90-8a8faea3da55] Running
	I0226 11:57:04.799906  642163 system_pods.go:89] "storage-provisioner" [4a5f8853-d1bf-43f7-841b-0fd3f295f4aa] Running
	I0226 11:57:04.799914  642163 system_pods.go:126] duration metric: took 203.38015ms to wait for k8s-apps to be running ...
	I0226 11:57:04.799926  642163 system_svc.go:44] waiting for kubelet service to be running ....
	I0226 11:57:04.799988  642163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 11:57:04.812761  642163 system_svc.go:56] duration metric: took 12.823963ms WaitForService to wait for kubelet.
	I0226 11:57:04.812807  642163 kubeadm.go:581] duration metric: took 16.328093479s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0226 11:57:04.812828  642163 node_conditions.go:102] verifying NodePressure condition ...
	I0226 11:57:04.994226  642163 request.go:629] Waited for 181.32277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0226 11:57:04.997403  642163 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0226 11:57:04.997447  642163 node_conditions.go:123] node cpu capacity is 2
	I0226 11:57:04.997467  642163 node_conditions.go:105] duration metric: took 184.632496ms to run NodePressure ...
	I0226 11:57:04.997479  642163 start.go:228] waiting for startup goroutines ...
	I0226 11:57:04.997494  642163 start.go:233] waiting for cluster config update ...
	I0226 11:57:04.997518  642163 start.go:242] writing updated cluster config ...
	I0226 11:57:04.997910  642163 ssh_runner.go:195] Run: rm -f paused
	I0226 11:57:05.063528  642163 start.go:601] kubectl: 1.29.2, cluster: 1.18.20 (minor skew: 11)
	I0226 11:57:05.066147  642163 out.go:177] 
	W0226 11:57:05.068242  642163 out.go:239] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0226 11:57:05.070174  642163 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0226 11:57:05.072131  642163 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-329029" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 26 12:00:06 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:06.920425531Z" level=info msg="Removing container: 2738114086dc422585831488d0ce335f07e8761fc9c0918c9156abe52f13b44e" id=9e536d2f-b71c-4a30-9080-7f05f5a888af name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Feb 26 12:00:06 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:06.945692911Z" level=info msg="Removed container 2738114086dc422585831488d0ce335f07e8761fc9c0918c9156abe52f13b44e: default/hello-world-app-5f5d8b66bb-2frbj/hello-world-app" id=9e536d2f-b71c-4a30-9080-7f05f5a888af name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Feb 26 12:00:07 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:07.436798294Z" level=info msg="Stopping pod sandbox: ad5590e4936c1838acb6b6bad76aeacf9ced3240f23d734925f1cf093b9a800d" id=b44b9804-8b35-47f7-a7e3-890617347dd9 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 26 12:00:07 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:07.441404312Z" level=info msg="Stopped pod sandbox: ad5590e4936c1838acb6b6bad76aeacf9ced3240f23d734925f1cf093b9a800d" id=b44b9804-8b35-47f7-a7e3-890617347dd9 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 26 12:00:07 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:07.938316115Z" level=info msg="Stopping pod sandbox: ad5590e4936c1838acb6b6bad76aeacf9ced3240f23d734925f1cf093b9a800d" id=cb955bee-245b-4e19-9491-95dec4a50985 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 26 12:00:07 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:07.938542571Z" level=info msg="Stopped pod sandbox (already stopped): ad5590e4936c1838acb6b6bad76aeacf9ced3240f23d734925f1cf093b9a800d" id=cb955bee-245b-4e19-9491-95dec4a50985 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 26 12:00:08 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:08.821674046Z" level=info msg="Stopping container: f2279ef96d5750d74fc4d11f0f07a8876cf0290f1eea55cc9208594a184dd81e (timeout: 2s)" id=a047a574-c0f0-4ef2-880f-6c28348a713a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 26 12:00:08 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:08.836837459Z" level=info msg="Stopping container: f2279ef96d5750d74fc4d11f0f07a8876cf0290f1eea55cc9208594a184dd81e (timeout: 2s)" id=539df8cd-41a6-4338-9884-cac43ceb33d1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 26 12:00:09 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:09.435734238Z" level=info msg="Stopping pod sandbox: ad5590e4936c1838acb6b6bad76aeacf9ced3240f23d734925f1cf093b9a800d" id=1cd9c6e5-1c0a-4003-9254-458f7a36b1e8 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 26 12:00:09 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:09.435781228Z" level=info msg="Stopped pod sandbox (already stopped): ad5590e4936c1838acb6b6bad76aeacf9ced3240f23d734925f1cf093b9a800d" id=1cd9c6e5-1c0a-4003-9254-458f7a36b1e8 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 26 12:00:10 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:10.839748747Z" level=warning msg="Stopping container f2279ef96d5750d74fc4d11f0f07a8876cf0290f1eea55cc9208594a184dd81e with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=a047a574-c0f0-4ef2-880f-6c28348a713a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 26 12:00:10 ingress-addon-legacy-329029 conmon[2710]: conmon f2279ef96d5750d74fc4 <ninfo>: container 2721 exited with status 137
	Feb 26 12:00:10 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:10.992315465Z" level=info msg="Stopped container f2279ef96d5750d74fc4d11f0f07a8876cf0290f1eea55cc9208594a184dd81e: ingress-nginx/ingress-nginx-controller-7fcf777cb7-dg9nd/controller" id=539df8cd-41a6-4338-9884-cac43ceb33d1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 26 12:00:10 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:10.993023261Z" level=info msg="Stopped container f2279ef96d5750d74fc4d11f0f07a8876cf0290f1eea55cc9208594a184dd81e: ingress-nginx/ingress-nginx-controller-7fcf777cb7-dg9nd/controller" id=a047a574-c0f0-4ef2-880f-6c28348a713a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Feb 26 12:00:10 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:10.993030325Z" level=info msg="Stopping pod sandbox: d9304154bf244490d7079d2806125566e6d4e389ececf704f24b811b957f1b69" id=88ac4b30-16fb-469c-9e0b-1e506125ed0f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 26 12:00:10 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:10.993342850Z" level=info msg="Stopping pod sandbox: d9304154bf244490d7079d2806125566e6d4e389ececf704f24b811b957f1b69" id=a9e4e3d2-cf8f-466e-a73a-e7bb3fd7a1a5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 26 12:00:10 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:10.996395951Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-M3FCSHBLNRV7FQAH - [0:0]\n:KUBE-HP-TSSS5JULI5E4F2ET - [0:0]\n-X KUBE-HP-M3FCSHBLNRV7FQAH\n-X KUBE-HP-TSSS5JULI5E4F2ET\nCOMMIT\n"
	Feb 26 12:00:10 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:10.997777501Z" level=info msg="Closing host port tcp:80"
	Feb 26 12:00:10 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:10.997827427Z" level=info msg="Closing host port tcp:443"
	Feb 26 12:00:10 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:10.998728137Z" level=info msg="Host port tcp:80 does not have an open socket"
	Feb 26 12:00:10 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:10.998752735Z" level=info msg="Host port tcp:443 does not have an open socket"
	Feb 26 12:00:10 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:10.998911279Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-dg9nd Namespace:ingress-nginx ID:d9304154bf244490d7079d2806125566e6d4e389ececf704f24b811b957f1b69 UID:58251f23-1560-4e54-a84e-89e7f38466f8 NetNS:/var/run/netns/783133bd-b3b4-4ba1-b69a-388626fb80d0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 26 12:00:10 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:10.999048965Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-dg9nd from CNI network \"kindnet\" (type=ptp)"
	Feb 26 12:00:11 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:11.018178300Z" level=info msg="Stopped pod sandbox: d9304154bf244490d7079d2806125566e6d4e389ececf704f24b811b957f1b69" id=88ac4b30-16fb-469c-9e0b-1e506125ed0f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Feb 26 12:00:11 ingress-addon-legacy-329029 crio[911]: time="2024-02-26 12:00:11.018303236Z" level=info msg="Stopped pod sandbox (already stopped): d9304154bf244490d7079d2806125566e6d4e389ececf704f24b811b957f1b69" id=a9e4e3d2-cf8f-466e-a73a-e7bb3fd7a1a5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	51872b857bd57       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                   10 seconds ago      Exited              hello-world-app           2                   f0b5bb99254ec       hello-world-app-5f5d8b66bb-2frbj
	1d0d402a820d5       docker.io/library/nginx@sha256:34aa0a372d3220dc0448131f809c72d8085f79bdec8058ad6970fc034a395674                    2 minutes ago       Running             nginx                     0                   4f4ede3296a47       nginx
	f2279ef96d575       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   d9304154bf244       ingress-nginx-controller-7fcf777cb7-dg9nd
	2274d95e2ce2b       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   69aedcd568c58       ingress-nginx-admission-patch-bssq6
	c5acb0a9b822c       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   871424b94b145       ingress-nginx-admission-create-x9npp
	f9dcadd590e00       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   e704cafe4072c       coredns-66bff467f8-vsnbs
	34a97a7b199bc       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   50ae71abc7675       storage-provisioner
	7840f3db6d79a       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988                 3 minutes ago       Running             kindnet-cni               0                   671cc4266c1d0       kindnet-fjxth
	b7a1a28c88b14       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   812b0a45c251a       kube-proxy-ccbz2
	94fb9b698ba7b       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   3 minutes ago       Running             kube-scheduler            0                   0d030fea73c00       kube-scheduler-ingress-addon-legacy-329029
	7ac9be0ad74a0       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   3 minutes ago       Running             kube-controller-manager   0                   699d751dd23a2       kube-controller-manager-ingress-addon-legacy-329029
	8ab6f513b2b20       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   3 minutes ago       Running             etcd                      0                   a215760afa261       etcd-ingress-addon-legacy-329029
	1981d720d6c00       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   3 minutes ago       Running             kube-apiserver            0                   7f3ca61052e36       kube-apiserver-ingress-addon-legacy-329029
	
	
	==> coredns [f9dcadd590e002cee8e8dc78f8e7d3a7a28ecb5e67f469f06dcb2008b9c978cb] <==
	[INFO] 10.244.0.5:57520 - 61478 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029455s
	[INFO] 10.244.0.5:39944 - 45924 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002664123s
	[INFO] 10.244.0.5:57520 - 20551 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003039883s
	[INFO] 10.244.0.5:57520 - 61751 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002419199s
	[INFO] 10.244.0.5:39944 - 3339 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002453348s
	[INFO] 10.244.0.5:57520 - 11625 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000063055s
	[INFO] 10.244.0.5:39944 - 60116 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000228851s
	[INFO] 10.244.0.5:36519 - 37213 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000102775s
	[INFO] 10.244.0.5:48300 - 39979 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000051158s
	[INFO] 10.244.0.5:36519 - 42403 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000046144s
	[INFO] 10.244.0.5:36519 - 5579 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036365s
	[INFO] 10.244.0.5:36519 - 31589 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049656s
	[INFO] 10.244.0.5:36519 - 30757 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046407s
	[INFO] 10.244.0.5:48300 - 17786 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000040057s
	[INFO] 10.244.0.5:36519 - 16664 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044159s
	[INFO] 10.244.0.5:48300 - 2045 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000078168s
	[INFO] 10.244.0.5:48300 - 63037 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040262s
	[INFO] 10.244.0.5:48300 - 51967 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040951s
	[INFO] 10.244.0.5:36519 - 29144 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00137469s
	[INFO] 10.244.0.5:48300 - 54785 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057196s
	[INFO] 10.244.0.5:36519 - 20896 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001274968s
	[INFO] 10.244.0.5:36519 - 26933 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044479s
	[INFO] 10.244.0.5:48300 - 479 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001111535s
	[INFO] 10.244.0.5:48300 - 6779 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000956347s
	[INFO] 10.244.0.5:48300 - 45051 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005055s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-329029
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-329029
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4011915ad0e9b27ff42994854397cc2ed93516c6
	                    minikube.k8s.io/name=ingress-addon-legacy-329029
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_26T11_56_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Feb 2024 11:56:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-329029
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Feb 2024 12:00:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Feb 2024 12:00:05 +0000   Mon, 26 Feb 2024 11:56:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Feb 2024 12:00:05 +0000   Mon, 26 Feb 2024 11:56:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Feb 2024 12:00:05 +0000   Mon, 26 Feb 2024 11:56:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Feb 2024 12:00:05 +0000   Mon, 26 Feb 2024 11:56:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-329029
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 55fcecfd100f4e24b76625bc1e8ae4ee
	  System UUID:                53b22072-1407-4b98-b105-ee03e2cc107f
	  Boot ID:                    18acc680-2ad9-4339-83b8-bdf83df5c458
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-2frbj                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bff467f8-vsnbs                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m29s
	  kube-system                 etcd-ingress-addon-legacy-329029                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kindnet-fjxth                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m28s
	  kube-system                 kube-apiserver-ingress-addon-legacy-329029             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-329029    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kube-proxy-ccbz2                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-scheduler-ingress-addon-legacy-329029             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m56s (x5 over 3m56s)  kubelet     Node ingress-addon-legacy-329029 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x5 over 3m56s)  kubelet     Node ingress-addon-legacy-329029 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x4 over 3m56s)  kubelet     Node ingress-addon-legacy-329029 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m41s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m41s                  kubelet     Node ingress-addon-legacy-329029 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m41s                  kubelet     Node ingress-addon-legacy-329029 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m41s                  kubelet     Node ingress-addon-legacy-329029 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m28s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m21s                  kubelet     Node ingress-addon-legacy-329029 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001050] FS-Cache: O-key=[8] '34e6c90000000000'
	[  +0.000726] FS-Cache: N-cookie c=000000e3 [p=000000da fl=2 nc=0 na=1]
	[  +0.000928] FS-Cache: N-cookie d=00000000841d89c0{9p.inode} n=0000000032cf39ba
	[  +0.001073] FS-Cache: N-key=[8] '34e6c90000000000'
	[  +0.003158] FS-Cache: Duplicate cookie detected
	[  +0.000716] FS-Cache: O-cookie c=000000dd [p=000000da fl=226 nc=0 na=1]
	[  +0.001109] FS-Cache: O-cookie d=00000000841d89c0{9p.inode} n=00000000e6eb4eb1
	[  +0.001112] FS-Cache: O-key=[8] '34e6c90000000000'
	[  +0.000700] FS-Cache: N-cookie c=000000e4 [p=000000da fl=2 nc=0 na=1]
	[  +0.001083] FS-Cache: N-cookie d=00000000841d89c0{9p.inode} n=00000000ef560195
	[  +0.001212] FS-Cache: N-key=[8] '34e6c90000000000'
	[  +2.493751] FS-Cache: Duplicate cookie detected
	[  +0.000848] FS-Cache: O-cookie c=000000db [p=000000da fl=226 nc=0 na=1]
	[  +0.001068] FS-Cache: O-cookie d=00000000841d89c0{9p.inode} n=00000000bf2216f9
	[  +0.001118] FS-Cache: O-key=[8] '33e6c90000000000'
	[  +0.000721] FS-Cache: N-cookie c=000000e6 [p=000000da fl=2 nc=0 na=1]
	[  +0.001039] FS-Cache: N-cookie d=00000000841d89c0{9p.inode} n=000000004a503876
	[  +0.001168] FS-Cache: N-key=[8] '33e6c90000000000'
	[  +0.381527] FS-Cache: Duplicate cookie detected
	[  +0.000702] FS-Cache: O-cookie c=000000e0 [p=000000da fl=226 nc=0 na=1]
	[  +0.000996] FS-Cache: O-cookie d=00000000841d89c0{9p.inode} n=0000000072d896e1
	[  +0.001119] FS-Cache: O-key=[8] '39e6c90000000000'
	[  +0.000703] FS-Cache: N-cookie c=000000e7 [p=000000da fl=2 nc=0 na=1]
	[  +0.001153] FS-Cache: N-cookie d=00000000841d89c0{9p.inode} n=0000000043dc7f09
	[  +0.001179] FS-Cache: N-key=[8] '39e6c90000000000'
	
	
	==> etcd [8ab6f513b2b20886ab25499e434ed7753a393acbc291e19259eb933112bfee1a] <==
	raft2024/02/26 11:56:23 INFO: aec36adc501070cc became follower at term 0
	raft2024/02/26 11:56:23 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/02/26 11:56:23 INFO: aec36adc501070cc became follower at term 1
	raft2024/02/26 11:56:23 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-02-26 11:56:23.774724 W | auth: simple token is not cryptographically signed
	2024-02-26 11:56:24.021638 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-02-26 11:56:24.036985 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/02/26 11:56:24 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-02-26 11:56:24.076836 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-02-26 11:56:24.228308 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-02-26 11:56:24.268691 I | embed: listening for peers on 192.168.49.2:2380
	2024-02-26 11:56:24.269068 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/02/26 11:56:24 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/02/26 11:56:24 INFO: aec36adc501070cc became candidate at term 2
	raft2024/02/26 11:56:24 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/02/26 11:56:24 INFO: aec36adc501070cc became leader at term 2
	raft2024/02/26 11:56:24 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-02-26 11:56:24.755317 I | etcdserver: setting up the initial cluster version to 3.4
	2024-02-26 11:56:24.755487 I | etcdserver: published {Name:ingress-addon-legacy-329029 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-02-26 11:56:24.755501 I | embed: ready to serve client requests
	2024-02-26 11:56:24.757574 I | embed: serving client requests on 127.0.0.1:2379
	2024-02-26 11:56:24.757680 I | embed: ready to serve client requests
	2024-02-26 11:56:24.758974 I | embed: serving client requests on 192.168.49.2:2379
	2024-02-26 11:56:24.759637 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-02-26 11:56:24.759789 I | etcdserver/api: enabled capabilities for version 3.4
	
	
	==> kernel <==
	 12:00:16 up 1 day, 42 min,  0 users,  load average: 0.29, 0.95, 1.17
	Linux ingress-addon-legacy-329029 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [7840f3db6d79ad409a88445bfb4c702e31b6de6ad60872e296cd9703cb3bfed8] <==
	I0226 11:58:11.381865       1 main.go:227] handling current node
	I0226 11:58:21.392435       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:58:21.392464       1 main.go:227] handling current node
	I0226 11:58:31.404071       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:58:31.404101       1 main.go:227] handling current node
	I0226 11:58:41.415881       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:58:41.415911       1 main.go:227] handling current node
	I0226 11:58:51.419430       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:58:51.419461       1 main.go:227] handling current node
	I0226 11:59:01.422792       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:59:01.422823       1 main.go:227] handling current node
	I0226 11:59:11.428749       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:59:11.428780       1 main.go:227] handling current node
	I0226 11:59:21.432510       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:59:21.432539       1 main.go:227] handling current node
	I0226 11:59:31.444577       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:59:31.444605       1 main.go:227] handling current node
	I0226 11:59:41.455825       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:59:41.455853       1 main.go:227] handling current node
	I0226 11:59:51.461730       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 11:59:51.461754       1 main.go:227] handling current node
	I0226 12:00:01.466439       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 12:00:01.466577       1 main.go:227] handling current node
	I0226 12:00:11.477008       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0226 12:00:11.477036       1 main.go:227] handling current node
	
	
	==> kube-apiserver [1981d720d6c001e8fd2b9a1d08aa81af7e67410e4ce9a6a4b7ba19804035147d] <==
	I0226 11:56:28.952035       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
	I0226 11:56:28.952080       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0226 11:56:29.038648       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0226 11:56:29.043485       1 cache.go:39] Caches are synced for autoregister controller
	I0226 11:56:29.044029       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0226 11:56:29.044071       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0226 11:56:29.044103       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0226 11:56:29.815099       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0226 11:56:29.815135       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0226 11:56:29.821773       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0226 11:56:29.835254       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0226 11:56:29.835279       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0226 11:56:30.378408       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0226 11:56:30.414547       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0226 11:56:30.563374       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0226 11:56:30.564563       1 controller.go:609] quota admission added evaluator for: endpoints
	I0226 11:56:30.568661       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0226 11:56:31.258235       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0226 11:56:31.995408       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0226 11:56:32.126564       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0226 11:56:35.417659       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0226 11:56:47.446068       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0226 11:56:47.850409       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0226 11:57:05.997661       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0226 11:57:29.616810       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [7ac9be0ad74a0ccece646c35186013630398f6906320dff49458ac03feff84f8] <==
	I0226 11:56:47.867310       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0226 11:56:47.867657       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0226 11:56:47.885895       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-329029", UID:"e14f4bf7-73ee-449e-a30d-c7ef75dc36df", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-329029 event: Registered Node ingress-addon-legacy-329029 in Controller
	I0226 11:56:48.004887       1 shared_informer.go:230] Caches are synced for resource quota 
	I0226 11:56:48.006549       1 shared_informer.go:230] Caches are synced for attach detach 
	I0226 11:56:48.006786       1 shared_informer.go:230] Caches are synced for resource quota 
	I0226 11:56:48.040598       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0226 11:56:48.058595       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0226 11:56:48.059193       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"09f16997-c713-4da0-9f65-88d82e25424f", APIVersion:"apps/v1", ResourceVersion:"202", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-ccbz2
	I0226 11:56:48.089317       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"9120db3e-3180-4ab0-a852-3f9fc0c6c2a7", APIVersion:"apps/v1", ResourceVersion:"209", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-fjxth
	I0226 11:56:48.104509       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0226 11:56:48.105483       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0226 11:56:48.105640       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0226 11:56:48.279339       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"84e941fc-49a2-4295-8ed7-3e5dc10b302d", APIVersion:"apps/v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0226 11:56:48.625413       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"12eaeba7-6598-46ea-a2dc-3426005fc642", APIVersion:"apps/v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-r6rfn
	I0226 11:56:57.867767       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0226 11:57:05.990804       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"d01f8e9c-5d49-4a73-bce9-78918252d4a6", APIVersion:"apps/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0226 11:57:06.006057       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"13921da4-6974-468a-b1ee-9a1bd0b97749", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-dg9nd
	I0226 11:57:06.025169       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"88efec93-c485-404b-87a8-196ad5947f7f", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-x9npp
	I0226 11:57:06.097442       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"67465273-66b8-49f2-af24-767db80824d3", APIVersion:"batch/v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-bssq6
	I0226 11:57:08.532098       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"88efec93-c485-404b-87a8-196ad5947f7f", APIVersion:"batch/v1", ResourceVersion:"495", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0226 11:57:09.516763       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"67465273-66b8-49f2-af24-767db80824d3", APIVersion:"batch/v1", ResourceVersion:"505", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0226 11:59:49.711545       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"ed9285f6-29f9-4b38-8612-28a1fec2eb60", APIVersion:"apps/v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0226 11:59:49.735711       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"68e618e9-a9cd-410e-8c5c-aabd4183c8bd", APIVersion:"apps/v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-2frbj
	E0226 12:00:13.496189       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-4sf8l" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [b7a1a28c88b14d21a96a841d71538bdb1f0a7a8934236195996de02b350091b8] <==
	W0226 11:56:48.733643       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0226 11:56:48.747080       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0226 11:56:48.747118       1 server_others.go:186] Using iptables Proxier.
	I0226 11:56:48.747869       1 server.go:583] Version: v1.18.20
	I0226 11:56:48.751927       1 config.go:315] Starting service config controller
	I0226 11:56:48.751973       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0226 11:56:48.752298       1 config.go:133] Starting endpoints config controller
	I0226 11:56:48.752318       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0226 11:56:48.853418       1 shared_informer.go:230] Caches are synced for service config 
	I0226 11:56:48.853490       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [94fb9b698ba7bd16e2b417330f2369d75d252929e1bebbf816d3ea13e417b71a] <==
	I0226 11:56:29.032619       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0226 11:56:29.032737       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0226 11:56:29.034599       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0226 11:56:29.034826       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0226 11:56:29.034870       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0226 11:56:29.034916       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0226 11:56:29.051534       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0226 11:56:29.051750       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0226 11:56:29.052457       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0226 11:56:29.052570       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0226 11:56:29.051895       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0226 11:56:29.051983       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0226 11:56:29.052066       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0226 11:56:29.052138       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0226 11:56:29.052229       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0226 11:56:29.052292       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0226 11:56:29.052350       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0226 11:56:29.080124       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0226 11:56:29.989342       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0226 11:56:30.052711       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0226 11:56:30.077879       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0226 11:56:30.105612       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0226 11:56:30.318381       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0226 11:56:32.935071       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0226 11:56:48.787818       1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
	
	
	==> kubelet <==
	Feb 26 11:59:53 ingress-addon-legacy-329029 kubelet[1633]: I0226 11:59:53.890657    1633 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 297c566520e38e12d9db9f497c2e391e3ac3d77b4ec10a4773362f14a6bb4882
	Feb 26 11:59:53 ingress-addon-legacy-329029 kubelet[1633]: I0226 11:59:53.890925    1633 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2738114086dc422585831488d0ce335f07e8761fc9c0918c9156abe52f13b44e
	Feb 26 11:59:53 ingress-addon-legacy-329029 kubelet[1633]: E0226 11:59:53.891168    1633 pod_workers.go:191] Error syncing pod 2184f57e-f746-4fbe-89e5-002ffc6cdcea ("hello-world-app-5f5d8b66bb-2frbj_default(2184f57e-f746-4fbe-89e5-002ffc6cdcea)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-2frbj_default(2184f57e-f746-4fbe-89e5-002ffc6cdcea)"
	Feb 26 11:59:54 ingress-addon-legacy-329029 kubelet[1633]: I0226 11:59:54.893532    1633 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2738114086dc422585831488d0ce335f07e8761fc9c0918c9156abe52f13b44e
	Feb 26 11:59:54 ingress-addon-legacy-329029 kubelet[1633]: E0226 11:59:54.893790    1633 pod_workers.go:191] Error syncing pod 2184f57e-f746-4fbe-89e5-002ffc6cdcea ("hello-world-app-5f5d8b66bb-2frbj_default(2184f57e-f746-4fbe-89e5-002ffc6cdcea)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-2frbj_default(2184f57e-f746-4fbe-89e5-002ffc6cdcea)"
	Feb 26 11:59:58 ingress-addon-legacy-329029 kubelet[1633]: E0226 11:59:58.436551    1633 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 26 11:59:58 ingress-addon-legacy-329029 kubelet[1633]: E0226 11:59:58.436603    1633 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 26 11:59:58 ingress-addon-legacy-329029 kubelet[1633]: E0226 11:59:58.436978    1633 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Feb 26 11:59:58 ingress-addon-legacy-329029 kubelet[1633]: E0226 11:59:58.437048    1633 pod_workers.go:191] Error syncing pod 1b68bad1-9ce5-4dbf-a269-1ad284c1c359 ("kube-ingress-dns-minikube_kube-system(1b68bad1-9ce5-4dbf-a269-1ad284c1c359)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Feb 26 12:00:05 ingress-addon-legacy-329029 kubelet[1633]: I0226 12:00:05.720342    1633 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-czh76" (UniqueName: "kubernetes.io/secret/1b68bad1-9ce5-4dbf-a269-1ad284c1c359-minikube-ingress-dns-token-czh76") pod "1b68bad1-9ce5-4dbf-a269-1ad284c1c359" (UID: "1b68bad1-9ce5-4dbf-a269-1ad284c1c359")
	Feb 26 12:00:05 ingress-addon-legacy-329029 kubelet[1633]: I0226 12:00:05.724241    1633 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b68bad1-9ce5-4dbf-a269-1ad284c1c359-minikube-ingress-dns-token-czh76" (OuterVolumeSpecName: "minikube-ingress-dns-token-czh76") pod "1b68bad1-9ce5-4dbf-a269-1ad284c1c359" (UID: "1b68bad1-9ce5-4dbf-a269-1ad284c1c359"). InnerVolumeSpecName "minikube-ingress-dns-token-czh76". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 26 12:00:05 ingress-addon-legacy-329029 kubelet[1633]: I0226 12:00:05.820711    1633 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-czh76" (UniqueName: "kubernetes.io/secret/1b68bad1-9ce5-4dbf-a269-1ad284c1c359-minikube-ingress-dns-token-czh76") on node "ingress-addon-legacy-329029" DevicePath ""
	Feb 26 12:00:06 ingress-addon-legacy-329029 kubelet[1633]: I0226 12:00:06.435716    1633 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2738114086dc422585831488d0ce335f07e8761fc9c0918c9156abe52f13b44e
	Feb 26 12:00:06 ingress-addon-legacy-329029 kubelet[1633]: I0226 12:00:06.918066    1633 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2738114086dc422585831488d0ce335f07e8761fc9c0918c9156abe52f13b44e
	Feb 26 12:00:06 ingress-addon-legacy-329029 kubelet[1633]: I0226 12:00:06.918320    1633 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 51872b857bd57ce1b0fd15f2121e3ac340c475955444beecfc411002bc79104d
	Feb 26 12:00:06 ingress-addon-legacy-329029 kubelet[1633]: E0226 12:00:06.918579    1633 pod_workers.go:191] Error syncing pod 2184f57e-f746-4fbe-89e5-002ffc6cdcea ("hello-world-app-5f5d8b66bb-2frbj_default(2184f57e-f746-4fbe-89e5-002ffc6cdcea)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-2frbj_default(2184f57e-f746-4fbe-89e5-002ffc6cdcea)"
	Feb 26 12:00:08 ingress-addon-legacy-329029 kubelet[1633]: E0226 12:00:08.824253    1633 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dg9nd.17b767e3211bfec7", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dg9nd", UID:"58251f23-1560-4e54-a84e-89e7f38466f8", APIVersion:"v1", ResourceVersion:"484", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-329029"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16f3d3230f12ec7, ext:216888129094, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16f3d3230f12ec7, ext:216888129094, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dg9nd.17b767e3211bfec7" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 26 12:00:08 ingress-addon-legacy-329029 kubelet[1633]: E0226 12:00:08.844053    1633 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dg9nd.17b767e3211bfec7", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dg9nd", UID:"58251f23-1560-4e54-a84e-89e7f38466f8", APIVersion:"v1", ResourceVersion:"484", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-329029"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16f3d3230f12ec7, ext:216888129094, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16f3d3231dc02cf, ext:216903518790, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dg9nd.17b767e3211bfec7" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 26 12:00:11 ingress-addon-legacy-329029 kubelet[1633]: W0226 12:00:11.928531    1633 pod_container_deletor.go:77] Container "d9304154bf244490d7079d2806125566e6d4e389ececf704f24b811b957f1b69" not found in pod's containers
	Feb 26 12:00:12 ingress-addon-legacy-329029 kubelet[1633]: I0226 12:00:12.937261    1633 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/58251f23-1560-4e54-a84e-89e7f38466f8-webhook-cert") pod "58251f23-1560-4e54-a84e-89e7f38466f8" (UID: "58251f23-1560-4e54-a84e-89e7f38466f8")
	Feb 26 12:00:12 ingress-addon-legacy-329029 kubelet[1633]: I0226 12:00:12.937322    1633 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-wr4q5" (UniqueName: "kubernetes.io/secret/58251f23-1560-4e54-a84e-89e7f38466f8-ingress-nginx-token-wr4q5") pod "58251f23-1560-4e54-a84e-89e7f38466f8" (UID: "58251f23-1560-4e54-a84e-89e7f38466f8")
	Feb 26 12:00:12 ingress-addon-legacy-329029 kubelet[1633]: I0226 12:00:12.943675    1633 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58251f23-1560-4e54-a84e-89e7f38466f8-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "58251f23-1560-4e54-a84e-89e7f38466f8" (UID: "58251f23-1560-4e54-a84e-89e7f38466f8"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 26 12:00:12 ingress-addon-legacy-329029 kubelet[1633]: I0226 12:00:12.944092    1633 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58251f23-1560-4e54-a84e-89e7f38466f8-ingress-nginx-token-wr4q5" (OuterVolumeSpecName: "ingress-nginx-token-wr4q5") pod "58251f23-1560-4e54-a84e-89e7f38466f8" (UID: "58251f23-1560-4e54-a84e-89e7f38466f8"). InnerVolumeSpecName "ingress-nginx-token-wr4q5". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 26 12:00:13 ingress-addon-legacy-329029 kubelet[1633]: I0226 12:00:13.037707    1633 reconciler.go:319] Volume detached for volume "ingress-nginx-token-wr4q5" (UniqueName: "kubernetes.io/secret/58251f23-1560-4e54-a84e-89e7f38466f8-ingress-nginx-token-wr4q5") on node "ingress-addon-legacy-329029" DevicePath ""
	Feb 26 12:00:13 ingress-addon-legacy-329029 kubelet[1633]: I0226 12:00:13.037781    1633 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/58251f23-1560-4e54-a84e-89e7f38466f8-webhook-cert") on node "ingress-addon-legacy-329029" DevicePath ""
	
	
	==> storage-provisioner [34a97a7b199bc0f0e4c73c6ae8409081800c594ac15d42c753a923f0bfb3e013] <==
	I0226 11:56:57.803213       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0226 11:56:57.818832       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0226 11:56:57.819130       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0226 11:56:57.826653       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0226 11:56:57.827056       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b233ce92-7490-4b5b-9edd-f41b5aa731ac", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-329029_88bdce58-5e05-4d37-a00c-4e80cf368bae became leader
	I0226 11:56:57.827134       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-329029_88bdce58-5e05-4d37-a00c-4e80cf368bae!
	I0226 11:56:57.928116       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-329029_88bdce58-5e05-4d37-a00c-4e80cf368bae!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-329029 -n ingress-addon-legacy-329029
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-329029 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (179.85s)
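
One error visible in the kubelet log above: the kube-ingress-dns-minikube pod hits ImageInspectError because its image is referenced by a short name (cryptexlabs/minikube-ingress-dns:0.3.0@sha256:...) and the node defines no unqualified-search registries in /etc/containers/registries.conf. As an illustrative sketch only, using the standard containers-registries.conf(5) v2 syntax and not taken from this run's node, a search registry entry that would let CRI-O resolve such short names looks like:

	# /etc/containers/registries.conf -- illustrative sketch, not this node's actual file
	unqualified-search-registries = ["docker.io"]

Fully qualifying the image reference in the manifest (docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:...) avoids relying on the search list at all.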

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (122.74s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-534129 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0226 12:24:41.616189  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-534129 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m55.295305318s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-534129] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-534129 in cluster pause-534129
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Updating the running docker "pause-534129" container ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-534129" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0226 12:23:50.081367  742422 out.go:291] Setting OutFile to fd 1 ...
	I0226 12:23:50.081597  742422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 12:23:50.081627  742422 out.go:304] Setting ErrFile to fd 2...
	I0226 12:23:50.081726  742422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 12:23:50.082028  742422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
	I0226 12:23:50.082541  742422 out.go:298] Setting JSON to false
	I0226 12:23:50.083757  742422 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":90376,"bootTime":1708859854,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0226 12:23:50.083890  742422 start.go:139] virtualization:  
	I0226 12:23:50.086584  742422 out.go:177] * [pause-534129] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0226 12:23:50.088907  742422 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 12:23:50.090761  742422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 12:23:50.088988  742422 notify.go:220] Checking for updates...
	I0226 12:23:50.093176  742422 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 12:23:50.094961  742422 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	I0226 12:23:50.097007  742422 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0226 12:23:50.098900  742422 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 12:23:50.101357  742422 config.go:182] Loaded profile config "pause-534129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 12:23:50.101944  742422 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 12:23:50.127239  742422 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0226 12:23:50.127375  742422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 12:23:50.204844  742422 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:66 SystemTime:2024-02-26 12:23:50.192913028 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 12:23:50.204955  742422 docker.go:295] overlay module found
	I0226 12:23:50.208919  742422 out.go:177] * Using the docker driver based on existing profile
	I0226 12:23:50.210822  742422 start.go:299] selected driver: docker
	I0226 12:23:50.210839  742422 start.go:903] validating driver "docker" against &{Name:pause-534129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-534129 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regist
ry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 12:23:50.210992  742422 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 12:23:50.211096  742422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 12:23:50.264832  742422 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:66 SystemTime:2024-02-26 12:23:50.255308152 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 12:23:50.265306  742422 cni.go:84] Creating CNI manager for ""
	I0226 12:23:50.265324  742422 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 12:23:50.265335  742422 start_flags.go:323] config:
	{Name:pause-534129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-534129 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false s
torage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 12:23:50.267511  742422 out.go:177] * Starting control plane node pause-534129 in cluster pause-534129
	I0226 12:23:50.269420  742422 cache.go:121] Beginning downloading kic base image for docker with crio
	I0226 12:23:50.271175  742422 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 12:23:50.273211  742422 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0226 12:23:50.273274  742422 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0226 12:23:50.273281  742422 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 12:23:50.273286  742422 cache.go:56] Caching tarball of preloaded images
	I0226 12:23:50.273417  742422 preload.go:174] Found /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0226 12:23:50.273427  742422 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0226 12:23:50.273562  742422 profile.go:148] Saving config to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/config.json ...
	I0226 12:23:50.289910  742422 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 12:23:50.289947  742422 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 12:23:50.289967  742422 cache.go:194] Successfully downloaded all kic artifacts
	I0226 12:23:50.289995  742422 start.go:365] acquiring machines lock for pause-534129: {Name:mkcccf4092789a125cee80b74fc27778123830a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 12:23:50.290069  742422 start.go:369] acquired machines lock for "pause-534129" in 42.632µs
	I0226 12:23:50.290095  742422 start.go:96] Skipping create...Using existing machine configuration
	I0226 12:23:50.290443  742422 fix.go:54] fixHost starting: 
	I0226 12:23:50.290763  742422 cli_runner.go:164] Run: docker container inspect pause-534129 --format={{.State.Status}}
	I0226 12:23:50.314611  742422 fix.go:102] recreateIfNeeded on pause-534129: state=Running err=<nil>
	W0226 12:23:50.314650  742422 fix.go:128] unexpected machine state, will restart: <nil>
	I0226 12:23:50.316750  742422 out.go:177] * Updating the running docker "pause-534129" container ...
	I0226 12:23:50.318404  742422 machine.go:88] provisioning docker machine ...
	I0226 12:23:50.318442  742422 ubuntu.go:169] provisioning hostname "pause-534129"
	I0226 12:23:50.318517  742422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-534129
	I0226 12:23:50.334608  742422 main.go:141] libmachine: Using SSH client type: native
	I0226 12:23:50.335035  742422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 36996 <nil> <nil>}
	I0226 12:23:50.335052  742422 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-534129 && echo "pause-534129" | sudo tee /etc/hostname
	I0226 12:23:50.490860  742422 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-534129
	
	I0226 12:23:50.490948  742422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-534129
	I0226 12:23:50.517094  742422 main.go:141] libmachine: Using SSH client type: native
	I0226 12:23:50.517353  742422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 36996 <nil> <nil>}
	I0226 12:23:50.517376  742422 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-534129' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-534129/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-534129' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 12:23:50.669971  742422 main.go:141] libmachine: SSH cmd err, output: <nil>: 
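The script above keeps /etc/hosts consistent with the freshly set hostname: if no entry already ends in pause-534129, it either rewrites an existing 127.0.1.1 line or appends a new one. A quick way to confirm the result on the node (a sketch, not part of the test run):
    grep -n 'pause-534129' /etc/hosts    # should show a 127.0.1.1 entry after provisioning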
	I0226 12:23:50.670058  742422 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18222-608626/.minikube CaCertPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18222-608626/.minikube}
	I0226 12:23:50.670093  742422 ubuntu.go:177] setting up certificates
	I0226 12:23:50.670145  742422 provision.go:83] configureAuth start
	I0226 12:23:50.670238  742422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-534129
	I0226 12:23:50.690280  742422 provision.go:138] copyHostCerts
	I0226 12:23:50.690378  742422 exec_runner.go:144] found /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem, removing ...
	I0226 12:23:50.690392  742422 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem
	I0226 12:23:50.690495  742422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem (1082 bytes)
	I0226 12:23:50.690742  742422 exec_runner.go:144] found /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem, removing ...
	I0226 12:23:50.690766  742422 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem
	I0226 12:23:50.690806  742422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem (1123 bytes)
	I0226 12:23:50.690912  742422 exec_runner.go:144] found /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem, removing ...
	I0226 12:23:50.690928  742422 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem
	I0226 12:23:50.690977  742422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem (1679 bytes)
	I0226 12:23:50.691058  742422 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca-key.pem org=jenkins.pause-534129 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-534129]
	I0226 12:23:51.929608  742422 provision.go:172] copyRemoteCerts
	I0226 12:23:51.929721  742422 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 12:23:51.929786  742422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-534129
	I0226 12:23:51.945968  742422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36996 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/pause-534129/id_rsa Username:docker}
	I0226 12:23:52.045960  742422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0226 12:23:52.072107  742422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0226 12:23:52.098348  742422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0226 12:23:52.122790  742422 provision.go:86] duration metric: configureAuth took 1.452617009s
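configureAuth above generates the docker-machine style server certificate with the SANs listed earlier (192.168.76.2, 127.0.0.1, localhost, minikube, pause-534129) and copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. If needed, the SANs can be double-checked from inside the container, for example:
    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'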
	I0226 12:23:52.122868  742422 ubuntu.go:193] setting minikube options for container-runtime
	I0226 12:23:52.123110  742422 config.go:182] Loaded profile config "pause-534129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 12:23:52.123229  742422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-534129
	I0226 12:23:52.139382  742422 main.go:141] libmachine: Using SSH client type: native
	I0226 12:23:52.139633  742422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 36996 <nil> <nil>}
	I0226 12:23:52.139659  742422 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0226 12:23:57.599227  742422 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0226 12:23:57.599253  742422 machine.go:91] provisioned docker machine in 7.280821704s
	I0226 12:23:57.599265  742422 start.go:300] post-start starting for "pause-534129" (driver="docker")
	I0226 12:23:57.599277  742422 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 12:23:57.599350  742422 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 12:23:57.599395  742422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-534129
	I0226 12:23:57.626384  742422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36996 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/pause-534129/id_rsa Username:docker}
	I0226 12:23:57.727097  742422 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 12:23:57.731291  742422 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 12:23:57.731338  742422 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 12:23:57.731349  742422 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 12:23:57.731356  742422 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 12:23:57.731366  742422 filesync.go:126] Scanning /home/jenkins/minikube-integration/18222-608626/.minikube/addons for local assets ...
	I0226 12:23:57.731427  742422 filesync.go:126] Scanning /home/jenkins/minikube-integration/18222-608626/.minikube/files for local assets ...
	I0226 12:23:57.731527  742422 filesync.go:149] local asset: /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem -> 6139882.pem in /etc/ssl/certs
	I0226 12:23:57.731641  742422 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 12:23:57.742942  742422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem --> /etc/ssl/certs/6139882.pem (1708 bytes)
	I0226 12:23:57.777757  742422 start.go:303] post-start completed in 178.476628ms
	I0226 12:23:57.777870  742422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 12:23:57.777949  742422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-534129
	I0226 12:23:57.800811  742422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36996 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/pause-534129/id_rsa Username:docker}
	I0226 12:23:57.902258  742422 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 12:23:57.907732  742422 fix.go:56] fixHost completed within 7.617284968s
	I0226 12:23:57.907756  742422 start.go:83] releasing machines lock for "pause-534129", held for 7.617673403s
	I0226 12:23:57.907838  742422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-534129
	I0226 12:23:57.928629  742422 ssh_runner.go:195] Run: cat /version.json
	I0226 12:23:57.928794  742422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-534129
	I0226 12:23:57.928892  742422 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 12:23:57.928931  742422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-534129
	I0226 12:23:57.964167  742422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36996 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/pause-534129/id_rsa Username:docker}
	I0226 12:23:57.985606  742422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36996 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/pause-534129/id_rsa Username:docker}
	I0226 12:23:58.214578  742422 ssh_runner.go:195] Run: systemctl --version
	I0226 12:23:58.219581  742422 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0226 12:23:58.398146  742422 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 12:23:58.402651  742422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0226 12:23:58.411752  742422 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0226 12:23:58.411842  742422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0226 12:23:58.420761  742422 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
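The two find commands above set aside pre-existing CNI configs by renaming them with a .mk_disabled suffix: here a loopback config was disabled, while no bridge/podman configs were found. That clears the way for the CNI minikube manages for this profile (kindnet, per the recommendation logged earlier). To see what was set aside on the node (a sketch):
    ls -l /etc/cni/net.d/                 # *.mk_disabled entries are the configs minikube moved aside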
	I0226 12:23:58.420788  742422 start.go:475] detecting cgroup driver to use...
	I0226 12:23:58.420849  742422 detect.go:196] detected "cgroupfs" cgroup driver on host os
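The "cgroupfs" driver detected here determines how CRI-O and the kubelet are configured further down (cgroup_manager = "cgroupfs", cgroupDriver: cgroupfs). On a Docker-driver host the same value can be cross-checked directly:
    docker info --format '{{.CgroupDriver}}'    # cgroupfs on this host, matching the detection above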
	I0226 12:23:58.420912  742422 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0226 12:23:58.433660  742422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0226 12:23:58.445226  742422 docker.go:217] disabling cri-docker service (if available) ...
	I0226 12:23:58.445350  742422 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0226 12:23:58.459007  742422 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0226 12:23:58.471671  742422 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0226 12:23:58.589295  742422 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0226 12:23:58.719571  742422 docker.go:233] disabling docker service ...
	I0226 12:23:58.719693  742422 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0226 12:23:58.733685  742422 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0226 12:23:58.745092  742422 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0226 12:23:58.857448  742422 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0226 12:23:58.979837  742422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
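To hand the node over to CRI-O, the steps above stop and mask the cri-docker and docker units; masking symlinks a unit to /dev/null so it cannot be started again until unmasked. A quick check (sketch):
    systemctl is-enabled docker.service cri-docker.service    # both should report "masked"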
	I0226 12:23:58.997057  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 12:23:59.015951  742422 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0226 12:23:59.016052  742422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 12:23:59.026659  742422 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0226 12:23:59.026747  742422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 12:23:59.037095  742422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 12:23:59.046829  742422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 12:23:59.056607  742422 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 12:23:59.066456  742422 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 12:23:59.075243  742422 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 12:23:59.083920  742422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 12:23:59.195666  742422 ssh_runner.go:195] Run: sudo systemctl restart crio
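After the sed edits above, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf end up roughly as follows (a sketch assuming the keys sit in their usual CRI-O sections; the drop-in shipped in the kicbase image may carry other settings as well):
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"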
	I0226 12:23:59.451333  742422 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0226 12:23:59.451416  742422 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0226 12:23:59.455199  742422 start.go:543] Will wait 60s for crictl version
	I0226 12:23:59.455260  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:23:59.458565  742422 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0226 12:23:59.498747  742422 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0226 12:23:59.498871  742422 ssh_runner.go:195] Run: crio --version
	I0226 12:23:59.537659  742422 ssh_runner.go:195] Run: crio --version
	I0226 12:23:59.578048  742422 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0226 12:23:59.579878  742422 cli_runner.go:164] Run: docker network inspect pause-534129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 12:23:59.598327  742422 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0226 12:23:59.602954  742422 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0226 12:23:59.603060  742422 ssh_runner.go:195] Run: sudo crictl images --output json
	I0226 12:23:59.652431  742422 crio.go:496] all images are preloaded for cri-o runtime.
	I0226 12:23:59.652466  742422 crio.go:415] Images already preloaded, skipping extraction
	I0226 12:23:59.652537  742422 ssh_runner.go:195] Run: sudo crictl images --output json
	I0226 12:23:59.689260  742422 crio.go:496] all images are preloaded for cri-o runtime.
	I0226 12:23:59.689285  742422 cache_images.go:84] Images are preloaded, skipping loading
	I0226 12:23:59.689364  742422 ssh_runner.go:195] Run: crio config
	I0226 12:23:59.735834  742422 cni.go:84] Creating CNI manager for ""
	I0226 12:23:59.735858  742422 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 12:23:59.735878  742422 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 12:23:59.735900  742422 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-534129 NodeName:pause-534129 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0226 12:23:59.736042  742422 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-534129"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 12:23:59.736112  742422 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-534129 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-534129 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
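The empty ExecStart= line in the generated drop-in is the standard systemd idiom: it clears the ExecStart inherited from the base kubelet.service before replacing it with the flag set above. The effective, merged unit can be viewed on the node with:
    systemctl cat kubelet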
	I0226 12:23:59.736181  742422 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0226 12:23:59.744872  742422 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 12:23:59.745021  742422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 12:23:59.753454  742422 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0226 12:23:59.771359  742422 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 12:23:59.789556  742422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
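The rendered config just copied to /var/tmp/minikube/kubeadm.yaml.new bundles the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents printed earlier. From inside the node it can be inspected, and (if this kubeadm release ships the validate subcommand) checked for errors, for example:
    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new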
	I0226 12:23:59.807436  742422 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0226 12:23:59.811046  742422 certs.go:56] Setting up /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129 for IP: 192.168.76.2
	I0226 12:23:59.811076  742422 certs.go:190] acquiring lock for shared ca certs: {Name:mkc71f6ba94614715b3b8dc8b06b5f59e5f1adfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:23:59.811233  742422 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.key
	I0226 12:23:59.811279  742422 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.key
	I0226 12:23:59.811356  742422 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/client.key
	I0226 12:23:59.811427  742422 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/apiserver.key.31bdca25
	I0226 12:23:59.811468  742422 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/proxy-client.key
	I0226 12:23:59.811585  742422 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/613988.pem (1338 bytes)
	W0226 12:23:59.811619  742422 certs.go:433] ignoring /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/613988_empty.pem, impossibly tiny 0 bytes
	I0226 12:23:59.811632  742422 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca-key.pem (1675 bytes)
	I0226 12:23:59.811664  742422 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem (1082 bytes)
	I0226 12:23:59.811693  742422 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem (1123 bytes)
	I0226 12:23:59.811717  742422 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem (1679 bytes)
	I0226 12:23:59.811765  742422 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem (1708 bytes)
	I0226 12:23:59.812409  742422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 12:23:59.836088  742422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0226 12:23:59.860715  742422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 12:23:59.885233  742422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0226 12:23:59.909066  742422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 12:23:59.933297  742422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0226 12:23:59.958574  742422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 12:23:59.983377  742422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0226 12:24:00.019546  742422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/certs/613988.pem --> /usr/share/ca-certificates/613988.pem (1338 bytes)
	I0226 12:24:00.121596  742422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem --> /usr/share/ca-certificates/6139882.pem (1708 bytes)
	I0226 12:24:00.221732  742422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 12:24:00.262258  742422 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 12:24:00.308287  742422 ssh_runner.go:195] Run: openssl version
	I0226 12:24:00.316105  742422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/613988.pem && ln -fs /usr/share/ca-certificates/613988.pem /etc/ssl/certs/613988.pem"
	I0226 12:24:00.356285  742422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/613988.pem
	I0226 12:24:00.362219  742422 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 11:52 /usr/share/ca-certificates/613988.pem
	I0226 12:24:00.362352  742422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/613988.pem
	I0226 12:24:00.385955  742422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/613988.pem /etc/ssl/certs/51391683.0"
	I0226 12:24:00.399842  742422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6139882.pem && ln -fs /usr/share/ca-certificates/6139882.pem /etc/ssl/certs/6139882.pem"
	I0226 12:24:00.426954  742422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6139882.pem
	I0226 12:24:00.433403  742422 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 11:52 /usr/share/ca-certificates/6139882.pem
	I0226 12:24:00.433539  742422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6139882.pem
	I0226 12:24:00.442217  742422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6139882.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 12:24:00.454152  742422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 12:24:00.467082  742422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 12:24:00.471446  742422 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 11:45 /usr/share/ca-certificates/minikubeCA.pem
	I0226 12:24:00.471535  742422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 12:24:00.480170  742422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
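The openssl x509 -hash calls above print each certificate's subject-name hash, and the ln -fs commands create the <hash>.0 symlinks that OpenSSL's default lookup in /etc/ssl/certs relies on. For the minikube CA, for instance:
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # should print b5213941, matching the link name above
    ls -l /etc/ssl/certs/b5213941.0                                            # the symlink created by the ln -fs above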
	I0226 12:24:00.490988  742422 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 12:24:00.495471  742422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0226 12:24:00.503363  742422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0226 12:24:00.512436  742422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0226 12:24:00.520241  742422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0226 12:24:00.528379  742422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0226 12:24:00.536188  742422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
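Each openssl x509 -checkend 86400 call above exits zero only if the certificate remains valid for at least the next 86400 seconds (24 hours), presumably so the existing control-plane certificates can be reused rather than regenerated. For example:
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for 24h" || echo "expires within 24h"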
	I0226 12:24:00.543633  742422 kubeadm.go:404] StartCluster: {Name:pause-534129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-534129 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-al
iases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 12:24:00.543791  742422 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0226 12:24:00.543881  742422 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0226 12:24:00.584102  742422 cri.go:89] found id: "5152ba0b726063b0f6990e24c0b6d98a30a1bc360b9d99690ca99ac165a00599"
	I0226 12:24:00.584127  742422 cri.go:89] found id: "b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1"
	I0226 12:24:00.584133  742422 cri.go:89] found id: "479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7"
	I0226 12:24:00.584137  742422 cri.go:89] found id: "7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0"
	I0226 12:24:00.584141  742422 cri.go:89] found id: "12de2891c734039891518f928b495affe6cddeedd038c4798071ec81470bfd1d"
	I0226 12:24:00.584145  742422 cri.go:89] found id: "46f68cd5dd2a2b28f43fa34abe8c41cb4e389d0385ac597a78eac4a057b3e442"
	I0226 12:24:00.584148  742422 cri.go:89] found id: "33889c34d2e73bc4fa847a38bdff924ad652989e35cc952e04d56d1a137192a8"
	I0226 12:24:00.584152  742422 cri.go:89] found id: ""
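Each ID listed above is a kube-system container reported by the crictl ps -a invocation; any of them can be examined in more detail with crictl inspect, e.g. for the first one:
    sudo crictl inspect 5152ba0b726063b0f6990e24c0b6d98a30a1bc360b9d99690ca99ac165a00599    # prints the container's JSON status and metadata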
	I0226 12:24:00.584205  742422 ssh_runner.go:195] Run: sudo runc list -f json
	I0226 12:24:00.607160  742422 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"12de2891c734039891518f928b495affe6cddeedd038c4798071ec81470bfd1d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/12de2891c734039891518f928b495affe6cddeedd038c4798071ec81470bfd1d/userdata","rootfs":"/var/lib/containers/storage/overlay/c31b58386b4b0c698d6d37c7ba64116e074df7890fc609f4688c074b9838b36b/merged","created":"2024-02-26T12:23:13.702275811Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c95a9554","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c95a9554\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"12de2891c734039891518f928b495affe6cddeedd038c4798071ec81470bfd1d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-26T12:23:13.557383657Z","io.kubernetes.cri-o.Image":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri-o.ImageRef":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-534129\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"00b68e4c0768a69acca248efe5f1fd60\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-534129_00b68e4c0768a69acca248efe5f1fd60/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kube
rnetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c31b58386b4b0c698d6d37c7ba64116e074df7890fc609f4688c074b9838b36b/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-534129_kube-system_00b68e4c0768a69acca248efe5f1fd60_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/6dda202a706c625907dabfb7013a6e495bc4f6eaa1ff4370471517c6296cafb2/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6dda202a706c625907dabfb7013a6e495bc4f6eaa1ff4370471517c6296cafb2","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-534129_kube-system_00b68e4c0768a69acca248efe5f1fd60_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/00b68e4c0768a69acca248efe5f1fd60/containers/kube-apiserver/f58a5e6c\",\"readonly\":false,\"propagation\":0,\"selinux_re
label\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/00b68e4c0768a69acca248efe5f1fd60/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-534129","io.k
ubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"00b68e4c0768a69acca248efe5f1fd60","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.76.2:8443","kubernetes.io/config.hash":"00b68e4c0768a69acca248efe5f1fd60","kubernetes.io/config.seen":"2024-02-26T12:23:12.916455606Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"33889c34d2e73bc4fa847a38bdff924ad652989e35cc952e04d56d1a137192a8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/33889c34d2e73bc4fa847a38bdff924ad652989e35cc952e04d56d1a137192a8/userdata","rootfs":"/var/lib/containers/storage/overlay/f57c5ed37c702f96f03ba822cdc5b89215644cf08fe682d674431bc591ea4eb2/merged","created":"2024-02-26T12:23:13.634788562Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b60ddd3e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kuberne
tes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b60ddd3e\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"33889c34d2e73bc4fa847a38bdff924ad652989e35cc952e04d56d1a137192a8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-26T12:23:13.485833196Z","io.kubernetes.cri-o.Image":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.4","io.kubernetes.cri-o.ImageRef":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.
kubernetes.pod.name\":\"kube-controller-manager-pause-534129\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e87490c73aa544fd7b73853c2ddd5f1f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-534129_e87490c73aa544fd7b73853c2ddd5f1f/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f57c5ed37c702f96f03ba822cdc5b89215644cf08fe682d674431bc591ea4eb2/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-534129_kube-system_e87490c73aa544fd7b73853c2ddd5f1f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/9115a7a998ae7196c36a091a4c1172e6906670b4ccc1fb60d5f4e94576dacd55/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9115a7a998ae7196c36a091a4c1172e6906670b4ccc1fb60d5f4e94576dacd55","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-53
4129_kube-system_e87490c73aa544fd7b73853c2ddd5f1f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e87490c73aa544fd7b73853c2ddd5f1f/containers/kube-controller-manager/9c475255\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e87490c73aa544fd7b73853c2ddd5f1f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes
/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-534129","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e87490c73aa544fd7b73853c2ddd5f1f","kubernetes.io/config.ha
sh":"e87490c73aa544fd7b73853c2ddd5f1f","kubernetes.io/config.seen":"2024-02-26T12:23:12.916456861Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46f68cd5dd2a2b28f43fa34abe8c41cb4e389d0385ac597a78eac4a057b3e442","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/46f68cd5dd2a2b28f43fa34abe8c41cb4e389d0385ac597a78eac4a057b3e442/userdata","rootfs":"/var/lib/containers/storage/overlay/a09c268c6a25db64ee0bb77fe1f4f88d416cc6d99cb3895620dc6b2ea3dca8a1/merged","created":"2024-02-26T12:23:13.630912815Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"32d33137","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"32d33137\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.t
erminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"46f68cd5dd2a2b28f43fa34abe8c41cb4e389d0385ac597a78eac4a057b3e442","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-26T12:23:13.520856387Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-534129\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"07f1f43b7b96f39381b2c5b71a8392b5\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-534129_07f1f43b7b96f39381b2c5b71a8392b5/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"n
ame\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a09c268c6a25db64ee0bb77fe1f4f88d416cc6d99cb3895620dc6b2ea3dca8a1/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-534129_kube-system_07f1f43b7b96f39381b2c5b71a8392b5_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ebb1f1255f0ab449a4d2cf12dfceba68c5225e56bb2a09b0af5154e4a02fcf4f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ebb1f1255f0ab449a4d2cf12dfceba68c5225e56bb2a09b0af5154e4a02fcf4f","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-534129_kube-system_07f1f43b7b96f39381b2c5b71a8392b5_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/07f1f43b7b96f39381b2c5b71a8392b5/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/de
v/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/07f1f43b7b96f39381b2c5b71a8392b5/containers/etcd/a8a7255e\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-534129","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"07f1f43b7b96f39381b2c5b71a8392b5","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.76.2:2379","kubernetes.io/config.hash":"07f1f43b7b96f39381b2c5b71a8392b5","kubernetes.io/config.seen":"2024-02-26T12:23:12.916454088Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"479bd7603543de88b10da38c411af444380df23
9c314df144c8b43df56a90ee7","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7/userdata","rootfs":"/var/lib/containers/storage/overlay/27edf64ce049f22345cbceda47ac82dcae692c36cf9555e3e26e33ad5c6387a5/merged","created":"2024-02-26T12:23:36.159277967Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b32cb60d","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b32cb60d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"479bd7603543de88b10
da38c411af444380df239c314df144c8b43df56a90ee7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-26T12:23:36.066873883Z","io.kubernetes.cri-o.Image":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.4","io.kubernetes.cri-o.ImageRef":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-6stnr\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ddf2146c-15dd-4280-b05f-6476a69b62a2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-6stnr_ddf2146c-15dd-4280-b05f-6476a69b62a2/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/27edf64ce049f22345cbceda47ac82dcae692c36cf9555e3e26e33ad5c6387a5/merged","io.kubernetes.cri-o.Name":"k
8s_kube-proxy_kube-proxy-6stnr_kube-system_ddf2146c-15dd-4280-b05f-6476a69b62a2_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4890087dcd06e2904a0ead8c51df8de8dd8e008fd052dab872fcfbbfd2fc5ea2/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4890087dcd06e2904a0ead8c51df8de8dd8e008fd052dab872fcfbbfd2fc5ea2","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-6stnr_kube-system_ddf2146c-15dd-4280-b05f-6476a69b62a2_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ddf2146c-15dd-4280-b05f-6476a69b62a2/e
tc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ddf2146c-15dd-4280-b05f-6476a69b62a2/containers/kube-proxy/0a16801f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/ddf2146c-15dd-4280-b05f-6476a69b62a2/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/ddf2146c-15dd-4280-b05f-6476a69b62a2/volumes/kubernetes.io~projected/kube-api-access-bdgjt\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-6stnr","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ddf2146c-15dd-4280-b05f-6476a69b62a2","kubernetes.io/config.seen":"2024-02-26T
12:23:35.677977492Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5152ba0b726063b0f6990e24c0b6d98a30a1bc360b9d99690ca99ac165a00599","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/5152ba0b726063b0f6990e24c0b6d98a30a1bc360b9d99690ca99ac165a00599/userdata","rootfs":"/var/lib/containers/storage/overlay/c9564d89a55be297b49995ac8664d0f3fcc210371089e730a17ceaba36951903/merged","created":"2024-02-26T12:23:38.776096944Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3cbb1b8c","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.
kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3cbb1b8c\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5152ba0b726063b0f6990e24c0b6d98a30a1bc360b9d99690ca99ac165a00599","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-26T12:23:38.746829545Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1
.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-jphcc\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1389dc8e-2557-486e-be8a-598958aa8372\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-jphcc_1389dc8e-2557-486e-be8a-598958aa8372/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c9564d89a55be297b49995ac8664d0f3fcc210371089e730a17ceaba36951903/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-jphcc_kube-system_1389dc8e-2557-486e-be8a-598958aa8372_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/6274b942ec4cf388887e6fc3f9e3a07398b71d79d205989e0d21c7ba374cd356/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6274b942ec4cf388887e6fc
3f9e3a07398b71d79d205989e0d21c7ba374cd356","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-jphcc_kube-system_1389dc8e-2557-486e-be8a-598958aa8372_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/1389dc8e-2557-486e-be8a-598958aa8372/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1389dc8e-2557-486e-be8a-598958aa8372/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1389dc8e-2557-486e-be8a-598958aa8372/containers/coredns/a4c5185c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/ser
viceaccount\",\"host_path\":\"/var/lib/kubelet/pods/1389dc8e-2557-486e-be8a-598958aa8372/volumes/kubernetes.io~projected/kube-api-access-vmhd7\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-jphcc","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1389dc8e-2557-486e-be8a-598958aa8372","kubernetes.io/config.seen":"2024-02-26T12:23:38.382614586Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0/userdata","rootfs":"/var/lib/containers/storage/overlay/faf42b402daa71a7ad6c03d77c2b1e859ac58c94ccc76f391a5cb0d77bf02071/merged","created":"2024-02-26T12:23:13.717255549Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"
e1639c7a","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e1639c7a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-26T12:23:13.557969715Z","io.kubernetes.cri-o.Image":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.4","io.kubernetes.cri-o.ImageRef":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d
2c54","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-534129\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"630b5db601f14e02b490489c47f27f89\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-534129_630b5db601f14e02b490489c47f27f89/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/faf42b402daa71a7ad6c03d77c2b1e859ac58c94ccc76f391a5cb0d77bf02071/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-534129_kube-system_630b5db601f14e02b490489c47f27f89_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/df4f7a419dbae6575d7813bdd6f52ee290f3719262686beb8ca7d2cffd0ff374/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"df4f7a419dbae6575d7813bdd6f52ee290f3719262686beb8ca7d2cffd0ff374","io.kubernetes.cri-o.SandboxNam
e":"k8s_kube-scheduler-pause-534129_kube-system_630b5db601f14e02b490489c47f27f89_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/630b5db601f14e02b490489c47f27f89/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/630b5db601f14e02b490489c47f27f89/containers/kube-scheduler/a94c84ac\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-534129","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"630b5db601f14e
02b490489c47f27f89","kubernetes.io/config.hash":"630b5db601f14e02b490489c47f27f89","kubernetes.io/config.seen":"2024-02-26T12:23:12.916448328Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1/userdata","rootfs":"/var/lib/containers/storage/overlay/1c53409ef6bf6ae5ca73c6f4aecc1c929ffef78c911c9a9c1cfe0eaf9e140f23/merged","created":"2024-02-26T12:23:37.773705423Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85013786","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85013786\",\"io.kubernetes.contain
er.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-02-26T12:23:37.740965005Z","io.kubernetes.cri-o.Image":"docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20240202-8f1494ea","io.kubernetes.cri-o.ImageRef":"4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-zgq8r\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"10aa1f57-33c0-4f80-b9dc-ac083e1b47c3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/ku
be-system_kindnet-zgq8r_10aa1f57-33c0-4f80-b9dc-ac083e1b47c3/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1c53409ef6bf6ae5ca73c6f4aecc1c929ffef78c911c9a9c1cfe0eaf9e140f23/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-zgq8r_kube-system_10aa1f57-33c0-4f80-b9dc-ac083e1b47c3_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/40c0bd9f28028489aba091a7e8f01f2b502f5841fc9747912bca32eda034b701/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"40c0bd9f28028489aba091a7e8f01f2b502f5841fc9747912bca32eda034b701","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-zgq8r_kube-system_10aa1f57-33c0-4f80-b9dc-ac083e1b47c3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\"
,\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/10aa1f57-33c0-4f80-b9dc-ac083e1b47c3/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/10aa1f57-33c0-4f80-b9dc-ac083e1b47c3/containers/kindnet-cni/4462cd7a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/10aa1f57-33c0-4f80-b9dc-ac083e1b47c3/volumes/kubernetes.io~projected/kube-api-access-58rcj\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.nam
e":"kindnet-zgq8r","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"10aa1f57-33c0-4f80-b9dc-ac083e1b47c3","kubernetes.io/config.seen":"2024-02-26T12:23:35.652226049Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I0226 12:24:00.607718  742422 cri.go:126] list returned 7 containers
	I0226 12:24:00.607733  742422 cri.go:129] container: {ID:12de2891c734039891518f928b495affe6cddeedd038c4798071ec81470bfd1d Status:stopped}
	I0226 12:24:00.607749  742422 cri.go:135] skipping {12de2891c734039891518f928b495affe6cddeedd038c4798071ec81470bfd1d stopped}: state = "stopped", want "paused"
	I0226 12:24:00.607762  742422 cri.go:129] container: {ID:33889c34d2e73bc4fa847a38bdff924ad652989e35cc952e04d56d1a137192a8 Status:stopped}
	I0226 12:24:00.607768  742422 cri.go:135] skipping {33889c34d2e73bc4fa847a38bdff924ad652989e35cc952e04d56d1a137192a8 stopped}: state = "stopped", want "paused"
	I0226 12:24:00.607775  742422 cri.go:129] container: {ID:46f68cd5dd2a2b28f43fa34abe8c41cb4e389d0385ac597a78eac4a057b3e442 Status:stopped}
	I0226 12:24:00.607783  742422 cri.go:135] skipping {46f68cd5dd2a2b28f43fa34abe8c41cb4e389d0385ac597a78eac4a057b3e442 stopped}: state = "stopped", want "paused"
	I0226 12:24:00.607790  742422 cri.go:129] container: {ID:479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7 Status:stopped}
	I0226 12:24:00.607795  742422 cri.go:135] skipping {479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7 stopped}: state = "stopped", want "paused"
	I0226 12:24:00.607805  742422 cri.go:129] container: {ID:5152ba0b726063b0f6990e24c0b6d98a30a1bc360b9d99690ca99ac165a00599 Status:stopped}
	I0226 12:24:00.607821  742422 cri.go:135] skipping {5152ba0b726063b0f6990e24c0b6d98a30a1bc360b9d99690ca99ac165a00599 stopped}: state = "stopped", want "paused"
	I0226 12:24:00.607827  742422 cri.go:129] container: {ID:7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0 Status:stopped}
	I0226 12:24:00.607834  742422 cri.go:135] skipping {7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0 stopped}: state = "stopped", want "paused"
	I0226 12:24:00.607846  742422 cri.go:129] container: {ID:b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1 Status:stopped}
	I0226 12:24:00.607853  742422 cri.go:135] skipping {b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1 stopped}: state = "stopped", want "paused"
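	(The list/skip loop above is minikube asking the low-level runtime for container state and keeping only paused containers; none qualify here because all seven report "stopped". A rough stand-alone equivalent on the node, assuming jq is available and that CRI-O keeps its runc state under /run/runc — that path is an assumption, not taken from the log:
	    sudo runc --root /run/runc list -f json \
	      | jq -r '.[] | select(.status == "paused") | .id'   # empty in this run: every container is "stopped"
	)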
	I0226 12:24:00.607919  742422 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 12:24:00.617374  742422 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0226 12:24:00.617398  742422 kubeadm.go:636] restartCluster start
	I0226 12:24:00.617465  742422 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0226 12:24:00.626335  742422 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:00.627035  742422 kubeconfig.go:92] found "pause-534129" server: "https://192.168.76.2:8443"
	I0226 12:24:00.627949  742422 kapi.go:59] client config for pause-534129: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/client.crt", KeyFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/client.key", CAFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1702d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0226 12:24:00.628591  742422 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0226 12:24:00.637990  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:00.638102  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:00.648969  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:01.138207  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:01.138416  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:01.150519  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:01.638081  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:01.638183  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:01.651222  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:02.138892  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:02.138994  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:02.150804  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:02.638112  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:02.638231  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:02.648428  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:03.139107  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:03.139244  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:03.149751  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:03.638118  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:03.638258  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:03.648809  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:04.138130  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:04.138234  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:04.152006  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:04.638712  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:04.638794  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:04.655675  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:05.138156  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:05.138240  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:05.185357  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:05.638828  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:05.638933  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:05.667954  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:06.138583  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:06.138675  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:06.160398  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:06.638018  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:06.638114  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:06.650081  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:07.138811  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:07.138932  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:07.155375  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:07.639072  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:07.639167  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:07.652551  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:08.138093  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:08.138199  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:08.150379  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:08.638785  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:08.638872  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:08.652379  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:09.139023  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:09.139124  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:09.152071  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:09.638662  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:09.638781  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:09.650520  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:10.138420  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:10.138528  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:10.150352  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:10.638021  742422 api_server.go:166] Checking apiserver status ...
	I0226 12:24:10.638138  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0226 12:24:10.648441  742422 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:10.648467  742422 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
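	(Each "Checking apiserver status" iteration above reduces to the same process probe; exit status 1 from pgrep means no matching kube-apiserver process exists yet, and after repeated failures minikube gives up and decides to reconfigure. The probe in isolation, identical to the command shown in the log:
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
	      && echo "apiserver process present" \
	      || echo "apiserver process not found (exit 1, as in the log)"
	)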
	I0226 12:24:10.648476  742422 kubeadm.go:1135] stopping kube-system containers ...
	I0226 12:24:10.648493  742422 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0226 12:24:10.648578  742422 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0226 12:24:10.689893  742422 cri.go:89] found id: "1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c"
	I0226 12:24:10.689917  742422 cri.go:89] found id: "c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e"
	I0226 12:24:10.689921  742422 cri.go:89] found id: "e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac"
	I0226 12:24:10.689925  742422 cri.go:89] found id: "912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed"
	I0226 12:24:10.689929  742422 cri.go:89] found id: "5152ba0b726063b0f6990e24c0b6d98a30a1bc360b9d99690ca99ac165a00599"
	I0226 12:24:10.689933  742422 cri.go:89] found id: "b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1"
	I0226 12:24:10.689936  742422 cri.go:89] found id: "479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7"
	I0226 12:24:10.689939  742422 cri.go:89] found id: "7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0"
	I0226 12:24:10.689942  742422 cri.go:89] found id: "12de2891c734039891518f928b495affe6cddeedd038c4798071ec81470bfd1d"
	I0226 12:24:10.689947  742422 cri.go:89] found id: "46f68cd5dd2a2b28f43fa34abe8c41cb4e389d0385ac597a78eac4a057b3e442"
	I0226 12:24:10.689950  742422 cri.go:89] found id: "33889c34d2e73bc4fa847a38bdff924ad652989e35cc952e04d56d1a137192a8"
	I0226 12:24:10.689953  742422 cri.go:89] found id: ""
	I0226 12:24:10.689958  742422 cri.go:234] Stopping containers: [1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac 912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed 5152ba0b726063b0f6990e24c0b6d98a30a1bc360b9d99690ca99ac165a00599 b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1 479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7 7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0 12de2891c734039891518f928b495affe6cddeedd038c4798071ec81470bfd1d 46f68cd5dd2a2b28f43fa34abe8c41cb4e389d0385ac597a78eac4a057b3e442 33889c34d2e73bc4fa847a38bdff924ad652989e35cc952e04d56d1a137192a8]
	I0226 12:24:10.690024  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:24:10.693494  742422 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac 912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed 5152ba0b726063b0f6990e24c0b6d98a30a1bc360b9d99690ca99ac165a00599 b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1 479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7 7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0 12de2891c734039891518f928b495affe6cddeedd038c4798071ec81470bfd1d 46f68cd5dd2a2b28f43fa34abe8c41cb4e389d0385ac597a78eac4a057b3e442 33889c34d2e73bc4fa847a38bdff924ad652989e35cc952e04d56d1a137192a8
	I0226 12:24:19.514338  742422 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac 912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed 5152ba0b726063b0f6990e24c0b6d98a30a1bc360b9d99690ca99ac165a00599 b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1 479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7 7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0 12de2891c734039891518f928b495affe6cddeedd038c4798071ec81470bfd1d 46f68cd5dd2a2b28f43fa34abe8c41cb4e389d0385ac597a78eac4a057b3e442 33889c34d2e73bc4fa847a38bdff924ad652989e35cc952e04d56d1a137192a8: (8.820784254s)
	W0226 12:24:19.514404  742422 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac 912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed 5152ba0b726063b0f6990e24c0b6d98a30a1bc360b9d99690ca99ac165a00599 b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1 479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7 7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0 12de2891c734039891518f928b495affe6cddeedd038c4798071ec81470bfd1d 46f68cd5dd2a2b28f43fa34abe8c41cb4e389d0385ac597a78eac4a057b3e442 33889c34d2e73bc4fa847a38bdff924ad652989e35cc952e04d56d1a137192a8: Process exited with status 1
	stdout:
	1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c
	c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e
	e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac
	912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed
	5152ba0b726063b0f6990e24c0b6d98a30a1bc360b9d99690ca99ac165a00599
	b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1
	479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7
	
	stderr:
	E0226 12:24:19.510468    2718 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0\": container with ID starting with 7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0 not found: ID does not exist" containerID="7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0"
	time="2024-02-26T12:24:19Z" level=fatal msg="stopping the container \"7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0\": rpc error: code = NotFound desc = could not find container \"7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0\": container with ID starting with 7c2b2153cd5c612b6606d56fdb4379cab889ce1c1b797e8088469a24b27d3ec0 not found: ID does not exist"
	I0226 12:24:19.514561  742422 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0226 12:24:19.607966  742422 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 12:24:19.635504  742422 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 26 12:23 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb 26 12:23 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Feb 26 12:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 26 12:23 /etc/kubernetes/scheduler.conf
	
	I0226 12:24:19.635582  742422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0226 12:24:19.653300  742422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0226 12:24:19.682231  742422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0226 12:24:19.706458  742422 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:19.706528  742422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0226 12:24:19.725258  742422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0226 12:24:19.746991  742422 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0226 12:24:19.747058  742422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
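	(The grep/rm pairs above implement a simple rule: a kubeconfig that does not already point at the expected control-plane endpoint is deleted so the next kubeadm phase can regenerate it. Condensed into a sketch, using the same files and endpoint as the log rather than minikube's actual code:
	    endpoint='https://control-plane.minikube.internal:8443'
	    for f in /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; do
	      sudo grep -q "$endpoint" "$f" || sudo rm -f "$f"
	    done
	)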
	I0226 12:24:19.761518  742422 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 12:24:19.771496  742422 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0226 12:24:19.771575  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 12:24:19.859972  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 12:24:21.710406  742422 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.850400223s)
	I0226 12:24:21.710436  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0226 12:24:21.935083  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 12:24:22.028633  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
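	(The five Run lines above are the cluster reconfiguration itself: each kubeadm init phase is replayed against the regenerated kubeadm.yaml, with PATH prefixed so the pinned v1.28.4 binaries are used. Written out as plain shell, the same commands the log executes minus the ssh_runner wrapper:
	    cfg=/var/tmp/minikube/kubeadm.yaml
	    bins=/var/lib/minikube/binaries/v1.28.4
	    sudo env PATH="$bins:$PATH" kubeadm init phase certs all         --config "$cfg"
	    sudo env PATH="$bins:$PATH" kubeadm init phase kubeconfig all    --config "$cfg"
	    sudo env PATH="$bins:$PATH" kubeadm init phase kubelet-start     --config "$cfg"
	    sudo env PATH="$bins:$PATH" kubeadm init phase control-plane all --config "$cfg"
	    sudo env PATH="$bins:$PATH" kubeadm init phase etcd local        --config "$cfg"
	)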
	I0226 12:24:22.192726  742422 api_server.go:52] waiting for apiserver process to appear ...
	I0226 12:24:22.192818  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 12:24:22.251971  742422 api_server.go:72] duration metric: took 59.243664ms to wait for apiserver process to appear ...
	I0226 12:24:22.251995  742422 api_server.go:88] waiting for apiserver healthz status ...
	I0226 12:24:22.252014  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:24:24.263223  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:24:24.263263  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
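	(The 500 responses above come from the apiserver's verbose healthz endpoint: the server process is up, but its etcd check and the post-start hooks that depend on etcd have not gone green yet, so minikube keeps polling. The same probe can be run by hand; -k skips TLS verification here purely for brevity — the test harness instead authenticates with the profile's client certificate:
	    curl -sk 'https://192.168.76.2:8443/healthz?verbose'
	    # healthy checks print "[+]<name> ok"; failing ones print
	    # "[-]<name> failed: reason withheld", as in the output above
	)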
	I0226 12:24:24.263277  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:24:26.273249  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:24:26.273285  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:24:26.273305  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:24:28.286702  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:24:28.286794  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:24:28.286823  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:24:30.296581  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:24:30.296631  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	[... identical healthz polls elided: api_server.go repeated the check against https://192.168.76.2:8443/healthz every ~2 seconds from 12:24:30 through 12:25:00, and every request returned 500 with exactly the verbose output shown above (etcd plus the start-service-ip-repair-controllers, rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes and bootstrap-controller poststarthooks failing, all other checks ok) ...]
	I0226 12:25:00.442545  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:02.452813  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:02.452848  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:02.452863  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:04.463716  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:04.463748  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:04.463763  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:06.473732  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:06.473776  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:06.473790  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:08.485775  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:08.485807  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:08.485824  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:10.494849  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:10.494894  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:10.494912  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:12.505293  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:12.505329  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:12.505376  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:14.517173  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:14.517200  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:14.517214  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:16.527569  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:16.527624  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:16.527651  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:18.538803  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:18.538836  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:18.538850  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:20.041974  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": EOF
	I0226 12:25:20.042023  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:20.098858  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:35598->192.168.76.2:8443: read: connection reset by peer
	I0226 12:25:20.252354  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:20.252704  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0226 12:25:20.752704  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:20.753132  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0226 12:25:21.252601  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:21.253017  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0226 12:25:21.752495  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:21.752859  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
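The run of 500s and connection errors above is minikube polling the apiserver's /healthz endpoint roughly every two seconds until the control plane either answers 200 or the connection drops while the apiserver restarts. Purely as an illustration of that polling pattern, and not minikube's actual api_server.go, a Go sketch could look like the following (the URL comes from the log; interval and timeout are assumptions):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // pollHealthz GETs the apiserver /healthz endpoint until it returns 200
    // or the timeout elapses. TLS verification is skipped only because the
    // test cluster uses a self-signed certificate.
    func pollHealthz(url string, timeout, interval time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err != nil {
                // connection refused / reset while the apiserver restarts
                time.Sleep(interval)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
            fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            time.Sleep(interval)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := pollHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute, 2*time.Second); err != nil {
            fmt.Println(err)
        }
    }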
	I0226 12:25:22.252517  742422 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0226 12:25:22.252615  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0226 12:25:22.311546  742422 cri.go:89] found id: "ee1c307b25cee84a6ed691ac79bdbf66fe243ae67eeed806ab2ca9188bcc6013"
	I0226 12:25:22.311565  742422 cri.go:89] found id: ""
	I0226 12:25:22.311573  742422 logs.go:276] 1 containers: [ee1c307b25cee84a6ed691ac79bdbf66fe243ae67eeed806ab2ca9188bcc6013]
	I0226 12:25:22.311638  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.322785  742422 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0226 12:25:22.322861  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0226 12:25:22.394406  742422 cri.go:89] found id: "fa9de77e9f95fbe9d4edb231f338c859854cbad6bb067bbc9e4878c99bf3dd96"
	I0226 12:25:22.394477  742422 cri.go:89] found id: "e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac"
	I0226 12:25:22.394499  742422 cri.go:89] found id: ""
	I0226 12:25:22.394525  742422 logs.go:276] 2 containers: [fa9de77e9f95fbe9d4edb231f338c859854cbad6bb067bbc9e4878c99bf3dd96 e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac]
	I0226 12:25:22.394615  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.414049  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.435232  742422 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0226 12:25:22.435312  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0226 12:25:22.520247  742422 cri.go:89] found id: "1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c"
	I0226 12:25:22.520317  742422 cri.go:89] found id: ""
	I0226 12:25:22.520340  742422 logs.go:276] 1 containers: [1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c]
	I0226 12:25:22.520429  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.531784  742422 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0226 12:25:22.531901  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0226 12:25:22.598302  742422 cri.go:89] found id: "48e1c9fe1bef8a7a1668683a53f33ed6da71035b818e348421f28421d0c375a8"
	I0226 12:25:22.598375  742422 cri.go:89] found id: "c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e"
	I0226 12:25:22.598395  742422 cri.go:89] found id: ""
	I0226 12:25:22.598419  742422 logs.go:276] 2 containers: [48e1c9fe1bef8a7a1668683a53f33ed6da71035b818e348421f28421d0c375a8 c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e]
	I0226 12:25:22.598536  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.602463  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.605792  742422 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0226 12:25:22.605902  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0226 12:25:22.666877  742422 cri.go:89] found id: "8e4d16140ea0b33e2b235d2b4233d1de3c2760692719185ea8efc8c25e287bfc"
	I0226 12:25:22.666950  742422 cri.go:89] found id: "479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7"
	I0226 12:25:22.666968  742422 cri.go:89] found id: ""
	I0226 12:25:22.666990  742422 logs.go:276] 2 containers: [8e4d16140ea0b33e2b235d2b4233d1de3c2760692719185ea8efc8c25e287bfc 479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7]
	I0226 12:25:22.667072  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.679472  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.687441  742422 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0226 12:25:22.687556  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0226 12:25:22.751396  742422 cri.go:89] found id: "75edd03cac47e5d5d8d371f1df9d4621bfa5f06cf53611bf6dc131eb31f13816"
	I0226 12:25:22.751466  742422 cri.go:89] found id: "912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed"
	I0226 12:25:22.751485  742422 cri.go:89] found id: ""
	I0226 12:25:22.751511  742422 logs.go:276] 2 containers: [75edd03cac47e5d5d8d371f1df9d4621bfa5f06cf53611bf6dc131eb31f13816 912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed]
	I0226 12:25:22.751631  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.755363  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.758995  742422 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0226 12:25:22.759118  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0226 12:25:22.807789  742422 cri.go:89] found id: "37b20f6e4c3363990b1728031036b5bb9b7d7982cc23f183eb0e6c08f2f80e9e"
	I0226 12:25:22.807849  742422 cri.go:89] found id: "b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1"
	I0226 12:25:22.807872  742422 cri.go:89] found id: ""
	I0226 12:25:22.807895  742422 logs.go:276] 2 containers: [37b20f6e4c3363990b1728031036b5bb9b7d7982cc23f183eb0e6c08f2f80e9e b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1]
	I0226 12:25:22.807980  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.811733  742422 ssh_runner.go:195] Run: which crictl
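Each component above is located by shelling out to `sudo crictl ps -a --quiet --name=<component>` and collecting the returned container IDs, running and exited alike. As an illustrative sketch of that shell-out pattern only (the helper name is made up, this is not minikube's cri.go):

    package crihelper

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs returns the IDs of all containers (any state) whose
    // name matches the given filter, using the same crictl invocation that
    // appears in the log above.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }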
	I0226 12:25:22.815278  742422 logs.go:123] Gathering logs for kubelet ...
	I0226 12:25:22.815340  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 12:25:22.979815  742422 logs.go:123] Gathering logs for describe nodes ...
	I0226 12:25:22.979856  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0226 12:25:26.822451  742422 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.842570745s)
	I0226 12:25:26.826482  742422 logs.go:123] Gathering logs for kube-controller-manager [912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed] ...
	I0226 12:25:26.826527  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed"
	I0226 12:25:26.921105  742422 logs.go:123] Gathering logs for CRI-O ...
	I0226 12:25:26.921131  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0226 12:25:27.067849  742422 logs.go:123] Gathering logs for etcd [fa9de77e9f95fbe9d4edb231f338c859854cbad6bb067bbc9e4878c99bf3dd96] ...
	I0226 12:25:27.067928  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa9de77e9f95fbe9d4edb231f338c859854cbad6bb067bbc9e4878c99bf3dd96"
	I0226 12:25:27.154105  742422 logs.go:123] Gathering logs for etcd [e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac] ...
	I0226 12:25:27.154231  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac"
	I0226 12:25:27.230632  742422 logs.go:123] Gathering logs for kube-scheduler [c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e] ...
	I0226 12:25:27.230717  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e"
	I0226 12:25:27.307524  742422 logs.go:123] Gathering logs for kindnet [37b20f6e4c3363990b1728031036b5bb9b7d7982cc23f183eb0e6c08f2f80e9e] ...
	I0226 12:25:27.307560  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37b20f6e4c3363990b1728031036b5bb9b7d7982cc23f183eb0e6c08f2f80e9e"
	I0226 12:25:27.430207  742422 logs.go:123] Gathering logs for kindnet [b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1] ...
	I0226 12:25:27.430279  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1"
	I0226 12:25:27.575925  742422 logs.go:123] Gathering logs for kube-controller-manager [75edd03cac47e5d5d8d371f1df9d4621bfa5f06cf53611bf6dc131eb31f13816] ...
	I0226 12:25:27.575950  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75edd03cac47e5d5d8d371f1df9d4621bfa5f06cf53611bf6dc131eb31f13816"
	I0226 12:25:27.680883  742422 logs.go:123] Gathering logs for dmesg ...
	I0226 12:25:27.680908  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 12:25:27.715237  742422 logs.go:123] Gathering logs for coredns [1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c] ...
	I0226 12:25:27.715311  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c"
	I0226 12:25:27.839857  742422 logs.go:123] Gathering logs for kube-scheduler [48e1c9fe1bef8a7a1668683a53f33ed6da71035b818e348421f28421d0c375a8] ...
	I0226 12:25:27.839881  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e1c9fe1bef8a7a1668683a53f33ed6da71035b818e348421f28421d0c375a8"
	I0226 12:25:27.907549  742422 logs.go:123] Gathering logs for kube-proxy [8e4d16140ea0b33e2b235d2b4233d1de3c2760692719185ea8efc8c25e287bfc] ...
	I0226 12:25:27.907623  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e4d16140ea0b33e2b235d2b4233d1de3c2760692719185ea8efc8c25e287bfc"
	I0226 12:25:27.967584  742422 logs.go:123] Gathering logs for kube-proxy [479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7] ...
	I0226 12:25:27.967690  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7"
	I0226 12:25:28.032887  742422 logs.go:123] Gathering logs for kube-apiserver [ee1c307b25cee84a6ed691ac79bdbf66fe243ae67eeed806ab2ca9188bcc6013] ...
	I0226 12:25:28.032915  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee1c307b25cee84a6ed691ac79bdbf66fe243ae67eeed806ab2ca9188bcc6013"
	I0226 12:25:28.162058  742422 logs.go:123] Gathering logs for container status ...
	I0226 12:25:28.162138  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
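With the apiserver still unhealthy, the tool falls back to collecting diagnostics: journalctl for kubelet and CRI-O, `kubectl describe nodes`, dmesg, container status, and the last 400 lines of each discovered container via `crictl logs --tail 400 <id>`. A minimal, hypothetical helper for that last step (the function name is invented for illustration) could be:

    package diagnostics

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLog returns the last n lines of a container's log,
    // mirroring the `sudo crictl logs --tail 400 <id>` calls in the log above.
    func tailContainerLog(id string, n int) (string, error) {
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
        if err != nil {
            return "", fmt.Errorf("crictl logs %s: %v\n%s", id, err, out)
        }
        return string(out), nil
    }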
	I0226 12:25:30.739455  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:30.753922  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0226 12:25:30.789204  742422 api_server.go:141] control plane version: v1.28.4
	I0226 12:25:30.789239  742422 api_server.go:131] duration metric: took 1m8.537236202s to wait for apiserver health ...
	I0226 12:25:30.789250  742422 cni.go:84] Creating CNI manager for ""
	I0226 12:25:30.789257  742422 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 12:25:30.792836  742422 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0226 12:25:30.794744  742422 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0226 12:25:30.799542  742422 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0226 12:25:30.799567  742422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0226 12:25:30.819612  742422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0226 12:25:32.204645  742422 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.384993801s)
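Once /healthz finally returns 200, a kindnet CNI manifest is written to /var/tmp/minikube/cni.yaml on the node and applied with the bundled kubectl binary. A sketch of that apply step follows; the helper and its signature are illustrative, while the paths come from the log:

    package cni

    import (
        "fmt"
        "os/exec"
    )

    // applyManifest applies a CNI manifest with the cluster's own kubectl,
    // pointed at the node-local kubeconfig, as shown in the log above.
    func applyManifest(kubectl, kubeconfig, manifest string) error {
        cmd := exec.Command("sudo", kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("kubectl apply: %v\n%s", err, out)
        }
        return nil
    }

For this run that would amount to applyManifest("/var/lib/minikube/binaries/v1.28.4/kubectl", "/var/lib/minikube/kubeconfig", "/var/tmp/minikube/cni.yaml").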
	I0226 12:25:32.204697  742422 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 12:25:32.214059  742422 system_pods.go:59] 7 kube-system pods found
	I0226 12:25:32.214157  742422 system_pods.go:61] "coredns-5dd5756b68-jphcc" [1389dc8e-2557-486e-be8a-598958aa8372] Running
	I0226 12:25:32.214185  742422 system_pods.go:61] "etcd-pause-534129" [458dda30-3b78-4276-9529-49adfbcadc22] Running
	I0226 12:25:32.214209  742422 system_pods.go:61] "kindnet-zgq8r" [10aa1f57-33c0-4f80-b9dc-ac083e1b47c3] Running
	I0226 12:25:32.214242  742422 system_pods.go:61] "kube-apiserver-pause-534129" [c2fabc8f-4ef0-4904-88e2-61c5677dc00e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0226 12:25:32.214263  742422 system_pods.go:61] "kube-controller-manager-pause-534129" [f2e96412-fff9-40c8-bdaf-3fb61ea1f0b9] Running
	I0226 12:25:32.214290  742422 system_pods.go:61] "kube-proxy-6stnr" [ddf2146c-15dd-4280-b05f-6476a69b62a2] Running
	I0226 12:25:32.214315  742422 system_pods.go:61] "kube-scheduler-pause-534129" [d2d23bbe-5a4c-4613-9d53-baa17af001cc] Running
	I0226 12:25:32.214340  742422 system_pods.go:74] duration metric: took 9.635341ms to wait for pod list to return data ...
	I0226 12:25:32.214362  742422 node_conditions.go:102] verifying NodePressure condition ...
	I0226 12:25:32.219535  742422 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0226 12:25:32.219607  742422 node_conditions.go:123] node cpu capacity is 2
	I0226 12:25:32.219640  742422 node_conditions.go:105] duration metric: took 5.25819ms to run NodePressure ...
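Before re-running the kubeadm addon phase, the NodePressure check reads the node's ephemeral-storage and CPU figures logged above. A hedged client-go sketch of reading those values is below; whether minikube uses Capacity or Allocatable is not confirmed by the log, so treat the field choice as an assumption:

    package nodecheck

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity reports a node's CPU and ephemeral-storage capacity,
    // the two figures the NodePressure check above logs.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface, name string) error {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", name, cpu.String(), storage.String())
        return nil
    }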
	I0226 12:25:32.219683  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 12:25:32.451457  742422 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0226 12:25:32.461024  742422 kubeadm.go:787] kubelet initialised
	I0226 12:25:32.461100  742422 kubeadm.go:788] duration metric: took 9.580171ms waiting for restarted kubelet to initialise ...
	I0226 12:25:32.461124  742422 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 12:25:32.468963  742422 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:32.479836  742422 pod_ready.go:92] pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:32.479858  742422 pod_ready.go:81] duration metric: took 10.825131ms waiting for pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:32.479873  742422 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:32.491981  742422 pod_ready.go:92] pod "etcd-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:32.492048  742422 pod_ready.go:81] duration metric: took 12.166482ms waiting for pod "etcd-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:32.492079  742422 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:34.500037  742422 pod_ready.go:102] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"False"
	I0226 12:25:36.501071  742422 pod_ready.go:102] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"False"
	I0226 12:25:39.004252  742422 pod_ready.go:102] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"False"
	I0226 12:25:41.517429  742422 pod_ready.go:102] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"False"
	I0226 12:25:42.000467  742422 pod_ready.go:92] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.000554  742422 pod_ready.go:81] duration metric: took 9.508452394s waiting for pod "kube-apiserver-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.000581  742422 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.009072  742422 pod_ready.go:92] pod "kube-controller-manager-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.009098  742422 pod_ready.go:81] duration metric: took 8.495092ms waiting for pod "kube-controller-manager-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.009110  742422 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6stnr" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.018114  742422 pod_ready.go:92] pod "kube-proxy-6stnr" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.018195  742422 pod_ready.go:81] duration metric: took 9.077393ms waiting for pod "kube-proxy-6stnr" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.018223  742422 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.027431  742422 pod_ready.go:92] pod "kube-scheduler-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.027499  742422 pod_ready.go:81] duration metric: took 9.255833ms waiting for pod "kube-scheduler-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.027524  742422 pod_ready.go:38] duration metric: took 9.566372506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
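The pod_ready lines above poll each system-critical pod until its PodReady condition reports True (the kube-apiserver pod takes about 9.5s here). A simplified, hypothetical wait loop along those lines, using client-go with an assumed 2-second poll interval:

    package podwait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition is True or the
    // timeout expires, roughly matching the pod_ready waits in the log.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }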
	I0226 12:25:42.027577  742422 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0226 12:25:42.039342  742422 ops.go:34] apiserver oom_adj: -16
	I0226 12:25:42.039413  742422 kubeadm.go:640] restartCluster took 1m41.422006574s
	I0226 12:25:42.039437  742422 kubeadm.go:406] StartCluster complete in 1m41.495813863s
	I0226 12:25:42.039485  742422 settings.go:142] acquiring lock: {Name:mk1588246e1eeb31f86f63cf3c470d51f6fe64da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:42.039581  742422 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 12:25:42.040374  742422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/kubeconfig: {Name:mk0efe1f972316757632066327a27c71356b5734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:42.040706  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0226 12:25:42.041053  742422 config.go:182] Loaded profile config "pause-534129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 12:25:42.041090  742422 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0226 12:25:42.043562  742422 out.go:177] * Enabled addons: 
	I0226 12:25:42.042134  742422 kapi.go:59] client config for pause-534129: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/client.crt", KeyFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/client.key", CAFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1702d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0226 12:25:42.045975  742422 addons.go:505] enable addons completed in 4.881948ms: enabled=[]
	I0226 12:25:42.050182  742422 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-534129" context rescaled to 1 replicas
	I0226 12:25:42.050264  742422 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0226 12:25:42.053963  742422 out.go:177] * Verifying Kubernetes components...
	I0226 12:25:42.056049  742422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 12:25:42.259970  742422 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0226 12:25:42.260021  742422 node_ready.go:35] waiting up to 6m0s for node "pause-534129" to be "Ready" ...
	I0226 12:25:42.265667  742422 node_ready.go:49] node "pause-534129" has status "Ready":"True"
	I0226 12:25:42.265689  742422 node_ready.go:38] duration metric: took 5.653477ms waiting for node "pause-534129" to be "Ready" ...
	I0226 12:25:42.265700  742422 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 12:25:42.277061  742422 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.396840  742422 pod_ready.go:92] pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.396867  742422 pod_ready.go:81] duration metric: took 119.730999ms waiting for pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.396880  742422 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.797380  742422 pod_ready.go:92] pod "etcd-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.797409  742422 pod_ready.go:81] duration metric: took 400.521397ms waiting for pod "etcd-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.797424  742422 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.196978  742422 pod_ready.go:92] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:43.197007  742422 pod_ready.go:81] duration metric: took 399.574923ms waiting for pod "kube-apiserver-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.197026  742422 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.597217  742422 pod_ready.go:92] pod "kube-controller-manager-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:43.597255  742422 pod_ready.go:81] duration metric: took 400.210974ms waiting for pod "kube-controller-manager-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.597267  742422 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6stnr" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.996858  742422 pod_ready.go:92] pod "kube-proxy-6stnr" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:43.996880  742422 pod_ready.go:81] duration metric: took 399.605052ms waiting for pod "kube-proxy-6stnr" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.996892  742422 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:44.396880  742422 pod_ready.go:92] pod "kube-scheduler-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:44.396954  742422 pod_ready.go:81] duration metric: took 400.052014ms waiting for pod "kube-scheduler-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:44.396981  742422 pod_ready.go:38] duration metric: took 2.131269367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 12:25:44.397025  742422 api_server.go:52] waiting for apiserver process to appear ...
	I0226 12:25:44.397109  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 12:25:44.409474  742422 api_server.go:72] duration metric: took 2.359155863s to wait for apiserver process to appear ...
	I0226 12:25:44.409540  742422 api_server.go:88] waiting for apiserver healthz status ...
	I0226 12:25:44.409576  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:44.418165  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0226 12:25:44.420995  742422 api_server.go:141] control plane version: v1.28.4
	I0226 12:25:44.421064  742422 api_server.go:131] duration metric: took 11.501042ms to wait for apiserver health ...
	I0226 12:25:44.421088  742422 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 12:25:44.599968  742422 system_pods.go:59] 7 kube-system pods found
	I0226 12:25:44.600002  742422 system_pods.go:61] "coredns-5dd5756b68-jphcc" [1389dc8e-2557-486e-be8a-598958aa8372] Running
	I0226 12:25:44.600008  742422 system_pods.go:61] "etcd-pause-534129" [458dda30-3b78-4276-9529-49adfbcadc22] Running
	I0226 12:25:44.600012  742422 system_pods.go:61] "kindnet-zgq8r" [10aa1f57-33c0-4f80-b9dc-ac083e1b47c3] Running
	I0226 12:25:44.600016  742422 system_pods.go:61] "kube-apiserver-pause-534129" [c2fabc8f-4ef0-4904-88e2-61c5677dc00e] Running
	I0226 12:25:44.600053  742422 system_pods.go:61] "kube-controller-manager-pause-534129" [f2e96412-fff9-40c8-bdaf-3fb61ea1f0b9] Running
	I0226 12:25:44.600064  742422 system_pods.go:61] "kube-proxy-6stnr" [ddf2146c-15dd-4280-b05f-6476a69b62a2] Running
	I0226 12:25:44.600075  742422 system_pods.go:61] "kube-scheduler-pause-534129" [d2d23bbe-5a4c-4613-9d53-baa17af001cc] Running
	I0226 12:25:44.600081  742422 system_pods.go:74] duration metric: took 178.972806ms to wait for pod list to return data ...
	I0226 12:25:44.600093  742422 default_sa.go:34] waiting for default service account to be created ...
	I0226 12:25:44.796417  742422 default_sa.go:45] found service account: "default"
	I0226 12:25:44.796446  742422 default_sa.go:55] duration metric: took 196.346157ms for default service account to be created ...
	I0226 12:25:44.796459  742422 system_pods.go:116] waiting for k8s-apps to be running ...
	I0226 12:25:45.000222  742422 system_pods.go:86] 7 kube-system pods found
	I0226 12:25:45.000264  742422 system_pods.go:89] "coredns-5dd5756b68-jphcc" [1389dc8e-2557-486e-be8a-598958aa8372] Running
	I0226 12:25:45.000272  742422 system_pods.go:89] "etcd-pause-534129" [458dda30-3b78-4276-9529-49adfbcadc22] Running
	I0226 12:25:45.000277  742422 system_pods.go:89] "kindnet-zgq8r" [10aa1f57-33c0-4f80-b9dc-ac083e1b47c3] Running
	I0226 12:25:45.000281  742422 system_pods.go:89] "kube-apiserver-pause-534129" [c2fabc8f-4ef0-4904-88e2-61c5677dc00e] Running
	I0226 12:25:45.000285  742422 system_pods.go:89] "kube-controller-manager-pause-534129" [f2e96412-fff9-40c8-bdaf-3fb61ea1f0b9] Running
	I0226 12:25:45.000289  742422 system_pods.go:89] "kube-proxy-6stnr" [ddf2146c-15dd-4280-b05f-6476a69b62a2] Running
	I0226 12:25:45.000293  742422 system_pods.go:89] "kube-scheduler-pause-534129" [d2d23bbe-5a4c-4613-9d53-baa17af001cc] Running
	I0226 12:25:45.000301  742422 system_pods.go:126] duration metric: took 203.836643ms to wait for k8s-apps to be running ...
	I0226 12:25:45.000312  742422 system_svc.go:44] waiting for kubelet service to be running ....
	I0226 12:25:45.000394  742422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 12:25:45.039346  742422 system_svc.go:56] duration metric: took 39.020604ms WaitForService to wait for kubelet.
	I0226 12:25:45.039386  742422 kubeadm.go:581] duration metric: took 2.989077825s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0226 12:25:45.039408  742422 node_conditions.go:102] verifying NodePressure condition ...
	I0226 12:25:45.197578  742422 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0226 12:25:45.197615  742422 node_conditions.go:123] node cpu capacity is 2
	I0226 12:25:45.197684  742422 node_conditions.go:105] duration metric: took 158.248997ms to run NodePressure ...
	I0226 12:25:45.197704  742422 start.go:228] waiting for startup goroutines ...
	I0226 12:25:45.197712  742422 start.go:233] waiting for cluster config update ...
	I0226 12:25:45.197726  742422 start.go:242] writing updated cluster config ...
	I0226 12:25:45.198124  742422 ssh_runner.go:195] Run: rm -f paused
	I0226 12:25:45.275137  742422 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0226 12:25:45.277518  742422 out.go:177] * Done! kubectl is now configured to use "pause-534129" cluster and "default" namespace by default

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-534129
helpers_test.go:235: (dbg) docker inspect pause-534129:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "407f1ca7602e4a32716a071088ff1960d80303110c592ded53aae3915daead03",
	        "Created": "2024-02-26T12:22:59.313548572Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 739196,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T12:22:59.704349543Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83c2925cbfe44b87b5c0672f16807927d87d8625e89de4dc154c45daaaa04b5b",
	        "ResolvConfPath": "/var/lib/docker/containers/407f1ca7602e4a32716a071088ff1960d80303110c592ded53aae3915daead03/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/407f1ca7602e4a32716a071088ff1960d80303110c592ded53aae3915daead03/hostname",
	        "HostsPath": "/var/lib/docker/containers/407f1ca7602e4a32716a071088ff1960d80303110c592ded53aae3915daead03/hosts",
	        "LogPath": "/var/lib/docker/containers/407f1ca7602e4a32716a071088ff1960d80303110c592ded53aae3915daead03/407f1ca7602e4a32716a071088ff1960d80303110c592ded53aae3915daead03-json.log",
	        "Name": "/pause-534129",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-534129:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-534129",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/da9b8555b43bef2db272a4e1e410ef1afbe3f774ff1d566608f87b9c0ae201c5-init/diff:/var/lib/docker/overlay2/f0e0da57c811333114b7a0181d8121ec20f9baacbcf19d34fad5038b1792b1cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da9b8555b43bef2db272a4e1e410ef1afbe3f774ff1d566608f87b9c0ae201c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da9b8555b43bef2db272a4e1e410ef1afbe3f774ff1d566608f87b9c0ae201c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da9b8555b43bef2db272a4e1e410ef1afbe3f774ff1d566608f87b9c0ae201c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-534129",
	                "Source": "/var/lib/docker/volumes/pause-534129/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-534129",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-534129",
	                "name.minikube.sigs.k8s.io": "pause-534129",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7614b7483eb66d795566f70a5b05b3169409fb71a7fe2ed6e5c62994fe0ff3ed",
	            "SandboxKey": "/var/run/docker/netns/7614b7483eb6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36996"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36995"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36992"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36994"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36993"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-534129": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "407f1ca7602e",
	                        "pause-534129"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "a3925384b61b0fa90f6ce71c7bcdb598ab12b2d4d4c59b2615f9495baf5587ce",
	                    "EndpointID": "6b097f85b8e161b1a5003ab18e6c12de169a9c35b6ea2ba835b5ad0678f6b34c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "pause-534129",
	                        "407f1ca7602e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-534129 -n pause-534129
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-534129 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-534129 logs -n 25: (2.356123589s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-737289            | NoKubernetes-737289       | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC | 26 Feb 24 12:18 UTC |
	| start   | -p NoKubernetes-737289            | NoKubernetes-737289       | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC | 26 Feb 24 12:18 UTC |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-737289 sudo       | NoKubernetes-737289       | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-269815         | missing-upgrade-269815    | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC | 26 Feb 24 12:19 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-737289            | NoKubernetes-737289       | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC | 26 Feb 24 12:18 UTC |
	| start   | -p NoKubernetes-737289            | NoKubernetes-737289       | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC | 26 Feb 24 12:18 UTC |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-737289 sudo       | NoKubernetes-737289       | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-737289            | NoKubernetes-737289       | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC | 26 Feb 24 12:18 UTC |
	| start   | -p kubernetes-upgrade-647247      | kubernetes-upgrade-647247 | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC | 26 Feb 24 12:19 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-269815         | missing-upgrade-269815    | jenkins | v1.32.0 | 26 Feb 24 12:19 UTC | 26 Feb 24 12:19 UTC |
	| start   | -p stopped-upgrade-535150         | minikube                  | jenkins | v1.26.0 | 26 Feb 24 12:19 UTC | 26 Feb 24 12:20 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --vm-driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-647247      | kubernetes-upgrade-647247 | jenkins | v1.32.0 | 26 Feb 24 12:19 UTC | 26 Feb 24 12:19 UTC |
	| start   | -p kubernetes-upgrade-647247      | kubernetes-upgrade-647247 | jenkins | v1.32.0 | 26 Feb 24 12:19 UTC | 26 Feb 24 12:24 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-535150 stop       | minikube                  | jenkins | v1.26.0 | 26 Feb 24 12:20 UTC | 26 Feb 24 12:20 UTC |
	| start   | -p stopped-upgrade-535150         | stopped-upgrade-535150    | jenkins | v1.32.0 | 26 Feb 24 12:20 UTC | 26 Feb 24 12:20 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-535150         | stopped-upgrade-535150    | jenkins | v1.32.0 | 26 Feb 24 12:21 UTC | 26 Feb 24 12:21 UTC |
	| start   | -p running-upgrade-462105         | minikube                  | jenkins | v1.26.0 | 26 Feb 24 12:21 UTC | 26 Feb 24 12:21 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --vm-driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p running-upgrade-462105         | running-upgrade-462105    | jenkins | v1.32.0 | 26 Feb 24 12:21 UTC | 26 Feb 24 12:22 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-462105         | running-upgrade-462105    | jenkins | v1.32.0 | 26 Feb 24 12:22 UTC | 26 Feb 24 12:22 UTC |
	| start   | -p pause-534129 --memory=2048     | pause-534129              | jenkins | v1.32.0 | 26 Feb 24 12:22 UTC | 26 Feb 24 12:23 UTC |
	|         | --install-addons=false            |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker        |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p pause-534129                   | pause-534129              | jenkins | v1.32.0 | 26 Feb 24 12:23 UTC | 26 Feb 24 12:25 UTC |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-647247      | kubernetes-upgrade-647247 | jenkins | v1.32.0 | 26 Feb 24 12:24 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-647247      | kubernetes-upgrade-647247 | jenkins | v1.32.0 | 26 Feb 24 12:24 UTC | 26 Feb 24 12:25 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-647247      | kubernetes-upgrade-647247 | jenkins | v1.32.0 | 26 Feb 24 12:25 UTC | 26 Feb 24 12:25 UTC |
	| start   | -p force-systemd-flag-700637      | force-systemd-flag-700637 | jenkins | v1.32.0 | 26 Feb 24 12:25 UTC |                     |
	|         | --memory=2048 --force-systemd     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 12:25:13
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 12:25:13.687294  747648 out.go:291] Setting OutFile to fd 1 ...
	I0226 12:25:13.687478  747648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 12:25:13.687491  747648 out.go:304] Setting ErrFile to fd 2...
	I0226 12:25:13.687497  747648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 12:25:13.687809  747648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
	I0226 12:25:13.688290  747648 out.go:298] Setting JSON to false
	I0226 12:25:13.689396  747648 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":90460,"bootTime":1708859854,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0226 12:25:13.689475  747648 start.go:139] virtualization:  
	I0226 12:25:13.693015  747648 out.go:177] * [force-systemd-flag-700637] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0226 12:25:13.695650  747648 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 12:25:13.697508  747648 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 12:25:13.695776  747648 notify.go:220] Checking for updates...
	I0226 12:25:13.701696  747648 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 12:25:13.703738  747648 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	I0226 12:25:13.705782  747648 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0226 12:25:13.707763  747648 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 12:25:13.710688  747648 config.go:182] Loaded profile config "pause-534129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 12:25:13.710795  747648 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 12:25:13.731485  747648 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0226 12:25:13.731603  747648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 12:25:13.800826  747648 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-26 12:25:13.790657964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 12:25:13.800942  747648 docker.go:295] overlay module found
	I0226 12:25:13.803288  747648 out.go:177] * Using the docker driver based on user configuration
	I0226 12:25:13.805276  747648 start.go:299] selected driver: docker
	I0226 12:25:13.805301  747648 start.go:903] validating driver "docker" against <nil>
	I0226 12:25:13.805332  747648 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 12:25:13.806003  747648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 12:25:13.867397  747648 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-26 12:25:13.858447027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 12:25:13.867558  747648 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 12:25:13.867776  747648 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0226 12:25:13.869940  747648 out.go:177] * Using Docker driver with root privileges
	I0226 12:25:13.872256  747648 cni.go:84] Creating CNI manager for ""
	I0226 12:25:13.872283  747648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 12:25:13.872294  747648 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0226 12:25:13.872306  747648 start_flags.go:323] config:
	{Name:force-systemd-flag-700637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-700637 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 12:25:13.875608  747648 out.go:177] * Starting control plane node force-systemd-flag-700637 in cluster force-systemd-flag-700637
	I0226 12:25:13.877495  747648 cache.go:121] Beginning downloading kic base image for docker with crio
	I0226 12:25:13.879464  747648 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 12:25:13.881270  747648 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0226 12:25:13.881324  747648 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0226 12:25:13.881337  747648 cache.go:56] Caching tarball of preloaded images
	I0226 12:25:13.881356  747648 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 12:25:13.881430  747648 preload.go:174] Found /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0226 12:25:13.881440  747648 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0226 12:25:13.881538  747648 profile.go:148] Saving config to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/config.json ...
	I0226 12:25:13.881555  747648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/config.json: {Name:mk8f37d166b96780031b38a61c35ba31df8b188d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:13.897174  747648 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 12:25:13.897203  747648 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 12:25:13.897226  747648 cache.go:194] Successfully downloaded all kic artifacts
	I0226 12:25:13.897254  747648 start.go:365] acquiring machines lock for force-systemd-flag-700637: {Name:mk93b0e487703cd02bc1cda9f90ab0e728164928 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 12:25:13.897380  747648 start.go:369] acquired machines lock for "force-systemd-flag-700637" in 108.608µs
	I0226 12:25:13.897425  747648 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-700637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-700637 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0226 12:25:13.897508  747648 start.go:125] createHost starting for "" (driver="docker")
	I0226 12:25:10.494849  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:10.494894  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:10.494912  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:12.505293  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:12.505329  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:12.505376  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:14.517173  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:14.517200  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:14.517214  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:13.899687  747648 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0226 12:25:13.899990  747648 start.go:159] libmachine.API.Create for "force-systemd-flag-700637" (driver="docker")
	I0226 12:25:13.900024  747648 client.go:168] LocalClient.Create starting
	I0226 12:25:13.900098  747648 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem
	I0226 12:25:13.900140  747648 main.go:141] libmachine: Decoding PEM data...
	I0226 12:25:13.900160  747648 main.go:141] libmachine: Parsing certificate...
	I0226 12:25:13.900219  747648 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem
	I0226 12:25:13.900242  747648 main.go:141] libmachine: Decoding PEM data...
	I0226 12:25:13.900253  747648 main.go:141] libmachine: Parsing certificate...
	I0226 12:25:13.900627  747648 cli_runner.go:164] Run: docker network inspect force-systemd-flag-700637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0226 12:25:13.916123  747648 cli_runner.go:211] docker network inspect force-systemd-flag-700637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0226 12:25:13.916210  747648 network_create.go:281] running [docker network inspect force-systemd-flag-700637] to gather additional debugging logs...
	I0226 12:25:13.916226  747648 cli_runner.go:164] Run: docker network inspect force-systemd-flag-700637
	W0226 12:25:13.933643  747648 cli_runner.go:211] docker network inspect force-systemd-flag-700637 returned with exit code 1
	I0226 12:25:13.933673  747648 network_create.go:284] error running [docker network inspect force-systemd-flag-700637]: docker network inspect force-systemd-flag-700637: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-700637 not found
	I0226 12:25:13.933762  747648 network_create.go:286] output of [docker network inspect force-systemd-flag-700637]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-700637 not found
	
	** /stderr **
	I0226 12:25:13.933879  747648 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 12:25:13.949929  747648 network.go:212] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2477e72d3a54 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ec:44:d1:b6} reservation:<nil>}
	I0226 12:25:13.950285  747648 network.go:212] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9c91f8a50e5b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:f6:83:3b:99} reservation:<nil>}
	I0226 12:25:13.950777  747648 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400257a6d0}
	I0226 12:25:13.950802  747648 network_create.go:124] attempt to create docker network force-systemd-flag-700637 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0226 12:25:13.950870  747648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-700637 force-systemd-flag-700637
	I0226 12:25:14.013537  747648 network_create.go:108] docker network force-systemd-flag-700637 192.168.67.0/24 created
	I0226 12:25:14.013575  747648 kic.go:121] calculated static IP "192.168.67.2" for the "force-systemd-flag-700637" container
	I0226 12:25:14.013656  747648 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0226 12:25:14.033354  747648 cli_runner.go:164] Run: docker volume create force-systemd-flag-700637 --label name.minikube.sigs.k8s.io=force-systemd-flag-700637 --label created_by.minikube.sigs.k8s.io=true
	I0226 12:25:14.050412  747648 oci.go:103] Successfully created a docker volume force-systemd-flag-700637
	I0226 12:25:14.050499  747648 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-700637-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-700637 --entrypoint /usr/bin/test -v force-systemd-flag-700637:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0226 12:25:14.678593  747648 oci.go:107] Successfully prepared a docker volume force-systemd-flag-700637
	I0226 12:25:14.678659  747648 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0226 12:25:14.678680  747648 kic.go:194] Starting extracting preloaded images to volume ...
	I0226 12:25:14.678779  747648 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-700637:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0226 12:25:16.527569  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:16.527624  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:16.527651  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:18.538803  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:18.538836  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:18.538850  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:20.041974  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": EOF
	I0226 12:25:20.042023  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:19.108174  747648 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-700637:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (4.429351659s)
	I0226 12:25:19.108214  747648 kic.go:203] duration metric: took 4.429530 seconds to extract preloaded images to volume
	W0226 12:25:19.108369  747648 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0226 12:25:19.108490  747648 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0226 12:25:19.188138  747648 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-700637 --name force-systemd-flag-700637 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-700637 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-700637 --network force-systemd-flag-700637 --ip 192.168.67.2 --volume force-systemd-flag-700637:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0226 12:25:19.551026  747648 cli_runner.go:164] Run: docker container inspect force-systemd-flag-700637 --format={{.State.Running}}
	I0226 12:25:19.569105  747648 cli_runner.go:164] Run: docker container inspect force-systemd-flag-700637 --format={{.State.Status}}
	I0226 12:25:19.592269  747648 cli_runner.go:164] Run: docker exec force-systemd-flag-700637 stat /var/lib/dpkg/alternatives/iptables
	I0226 12:25:19.675217  747648 oci.go:144] the created container "force-systemd-flag-700637" has a running status.
	I0226 12:25:19.675254  747648 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa...
	I0226 12:25:19.935834  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0226 12:25:19.935894  747648 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0226 12:25:19.986382  747648 cli_runner.go:164] Run: docker container inspect force-systemd-flag-700637 --format={{.State.Status}}
	I0226 12:25:20.023862  747648 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0226 12:25:20.023890  747648 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-700637 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0226 12:25:20.127387  747648 cli_runner.go:164] Run: docker container inspect force-systemd-flag-700637 --format={{.State.Status}}
	I0226 12:25:20.152487  747648 machine.go:88] provisioning docker machine ...
	I0226 12:25:20.152525  747648 ubuntu.go:169] provisioning hostname "force-systemd-flag-700637"
	I0226 12:25:20.152607  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:20.180844  747648 main.go:141] libmachine: Using SSH client type: native
	I0226 12:25:20.181126  747648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 37001 <nil> <nil>}
	I0226 12:25:20.181144  747648 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-700637 && echo "force-systemd-flag-700637" | sudo tee /etc/hostname
	I0226 12:25:20.181712  747648 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47360->127.0.0.1:37001: read: connection reset by peer
	I0226 12:25:23.372822  747648 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-700637
	
	I0226 12:25:23.372968  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:23.404859  747648 main.go:141] libmachine: Using SSH client type: native
	I0226 12:25:23.405112  747648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 37001 <nil> <nil>}
	I0226 12:25:23.405129  747648 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-700637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-700637/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-700637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 12:25:23.580972  747648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 12:25:23.581002  747648 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18222-608626/.minikube CaCertPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18222-608626/.minikube}
	I0226 12:25:23.581039  747648 ubuntu.go:177] setting up certificates
	I0226 12:25:23.581055  747648 provision.go:83] configureAuth start
	I0226 12:25:23.581119  747648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-700637
	I0226 12:25:23.611328  747648 provision.go:138] copyHostCerts
	I0226 12:25:23.611369  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem
	I0226 12:25:23.611402  747648 exec_runner.go:144] found /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem, removing ...
	I0226 12:25:23.611409  747648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem
	I0226 12:25:23.611485  747648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem (1082 bytes)
	I0226 12:25:23.611576  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem
	I0226 12:25:23.611610  747648 exec_runner.go:144] found /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem, removing ...
	I0226 12:25:23.611615  747648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem
	I0226 12:25:23.611648  747648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem (1123 bytes)
	I0226 12:25:23.611698  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem
	I0226 12:25:23.611715  747648 exec_runner.go:144] found /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem, removing ...
	I0226 12:25:23.611721  747648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem
	I0226 12:25:23.611745  747648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem (1679 bytes)
	I0226 12:25:23.611832  747648 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-700637 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-flag-700637]
	I0226 12:25:20.098858  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:35598->192.168.76.2:8443: read: connection reset by peer
	I0226 12:25:20.252354  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:20.252704  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0226 12:25:20.752704  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:20.753132  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0226 12:25:21.252601  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:21.253017  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0226 12:25:21.752495  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:21.752859  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0226 12:25:22.252517  742422 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0226 12:25:22.252615  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0226 12:25:22.311546  742422 cri.go:89] found id: "ee1c307b25cee84a6ed691ac79bdbf66fe243ae67eeed806ab2ca9188bcc6013"
	I0226 12:25:22.311565  742422 cri.go:89] found id: ""
	I0226 12:25:22.311573  742422 logs.go:276] 1 containers: [ee1c307b25cee84a6ed691ac79bdbf66fe243ae67eeed806ab2ca9188bcc6013]
	I0226 12:25:22.311638  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.322785  742422 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0226 12:25:22.322861  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0226 12:25:22.394406  742422 cri.go:89] found id: "fa9de77e9f95fbe9d4edb231f338c859854cbad6bb067bbc9e4878c99bf3dd96"
	I0226 12:25:22.394477  742422 cri.go:89] found id: "e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac"
	I0226 12:25:22.394499  742422 cri.go:89] found id: ""
	I0226 12:25:22.394525  742422 logs.go:276] 2 containers: [fa9de77e9f95fbe9d4edb231f338c859854cbad6bb067bbc9e4878c99bf3dd96 e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac]
	I0226 12:25:22.394615  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.414049  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.435232  742422 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0226 12:25:22.435312  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0226 12:25:22.520247  742422 cri.go:89] found id: "1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c"
	I0226 12:25:22.520317  742422 cri.go:89] found id: ""
	I0226 12:25:22.520340  742422 logs.go:276] 1 containers: [1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c]
	I0226 12:25:22.520429  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.531784  742422 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0226 12:25:22.531901  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0226 12:25:22.598302  742422 cri.go:89] found id: "48e1c9fe1bef8a7a1668683a53f33ed6da71035b818e348421f28421d0c375a8"
	I0226 12:25:22.598375  742422 cri.go:89] found id: "c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e"
	I0226 12:25:22.598395  742422 cri.go:89] found id: ""
	I0226 12:25:22.598419  742422 logs.go:276] 2 containers: [48e1c9fe1bef8a7a1668683a53f33ed6da71035b818e348421f28421d0c375a8 c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e]
	I0226 12:25:22.598536  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.602463  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.605792  742422 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0226 12:25:22.605902  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0226 12:25:22.666877  742422 cri.go:89] found id: "8e4d16140ea0b33e2b235d2b4233d1de3c2760692719185ea8efc8c25e287bfc"
	I0226 12:25:22.666950  742422 cri.go:89] found id: "479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7"
	I0226 12:25:22.666968  742422 cri.go:89] found id: ""
	I0226 12:25:22.666990  742422 logs.go:276] 2 containers: [8e4d16140ea0b33e2b235d2b4233d1de3c2760692719185ea8efc8c25e287bfc 479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7]
	I0226 12:25:22.667072  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.679472  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.687441  742422 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0226 12:25:22.687556  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0226 12:25:22.751396  742422 cri.go:89] found id: "75edd03cac47e5d5d8d371f1df9d4621bfa5f06cf53611bf6dc131eb31f13816"
	I0226 12:25:22.751466  742422 cri.go:89] found id: "912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed"
	I0226 12:25:22.751485  742422 cri.go:89] found id: ""
	I0226 12:25:22.751511  742422 logs.go:276] 2 containers: [75edd03cac47e5d5d8d371f1df9d4621bfa5f06cf53611bf6dc131eb31f13816 912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed]
	I0226 12:25:22.751631  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.755363  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.758995  742422 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0226 12:25:22.759118  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0226 12:25:22.807789  742422 cri.go:89] found id: "37b20f6e4c3363990b1728031036b5bb9b7d7982cc23f183eb0e6c08f2f80e9e"
	I0226 12:25:22.807849  742422 cri.go:89] found id: "b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1"
	I0226 12:25:22.807872  742422 cri.go:89] found id: ""
	I0226 12:25:22.807895  742422 logs.go:276] 2 containers: [37b20f6e4c3363990b1728031036b5bb9b7d7982cc23f183eb0e6c08f2f80e9e b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1]
	I0226 12:25:22.807980  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.811733  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.815278  742422 logs.go:123] Gathering logs for kubelet ...
	I0226 12:25:22.815340  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 12:25:22.979815  742422 logs.go:123] Gathering logs for describe nodes ...
	I0226 12:25:22.979856  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0226 12:25:24.499507  747648 provision.go:172] copyRemoteCerts
	I0226 12:25:24.499620  747648 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 12:25:24.499679  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:24.520846  747648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37001 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa Username:docker}
	I0226 12:25:24.630710  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0226 12:25:24.630770  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0226 12:25:24.680317  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0226 12:25:24.680425  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0226 12:25:24.718780  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0226 12:25:24.718840  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0226 12:25:24.761641  747648 provision.go:86] duration metric: configureAuth took 1.180567857s
	I0226 12:25:24.761664  747648 ubuntu.go:193] setting minikube options for container-runtime
	I0226 12:25:24.761842  747648 config.go:182] Loaded profile config "force-systemd-flag-700637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 12:25:24.761955  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:24.799612  747648 main.go:141] libmachine: Using SSH client type: native
	I0226 12:25:24.799855  747648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 37001 <nil> <nil>}
	I0226 12:25:24.799870  747648 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0226 12:25:25.119153  747648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0226 12:25:25.119188  747648 machine.go:91] provisioned docker machine in 4.966675867s
	I0226 12:25:25.119200  747648 client.go:171] LocalClient.Create took 11.219164561s
	I0226 12:25:25.119222  747648 start.go:167] duration metric: libmachine.API.Create for "force-systemd-flag-700637" took 11.219231702s
	I0226 12:25:25.119236  747648 start.go:300] post-start starting for "force-systemd-flag-700637" (driver="docker")
	I0226 12:25:25.119248  747648 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 12:25:25.119352  747648 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 12:25:25.119409  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:25.144850  747648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37001 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa Username:docker}
	I0226 12:25:25.263461  747648 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 12:25:25.273374  747648 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 12:25:25.273415  747648 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 12:25:25.273430  747648 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 12:25:25.273438  747648 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 12:25:25.273448  747648 filesync.go:126] Scanning /home/jenkins/minikube-integration/18222-608626/.minikube/addons for local assets ...
	I0226 12:25:25.273504  747648 filesync.go:126] Scanning /home/jenkins/minikube-integration/18222-608626/.minikube/files for local assets ...
	I0226 12:25:25.273596  747648 filesync.go:149] local asset: /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem -> 6139882.pem in /etc/ssl/certs
	I0226 12:25:25.273603  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem -> /etc/ssl/certs/6139882.pem
	I0226 12:25:25.273721  747648 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 12:25:25.288351  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem --> /etc/ssl/certs/6139882.pem (1708 bytes)
	I0226 12:25:25.319208  747648 start.go:303] post-start completed in 199.955865ms
	I0226 12:25:25.319686  747648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-700637
	I0226 12:25:25.346305  747648 profile.go:148] Saving config to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/config.json ...
	I0226 12:25:25.346609  747648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 12:25:25.346652  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:25.384842  747648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37001 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa Username:docker}
	I0226 12:25:25.493092  747648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 12:25:25.498669  747648 start.go:128] duration metric: createHost completed in 11.601142841s
	I0226 12:25:25.498697  747648 start.go:83] releasing machines lock for "force-systemd-flag-700637", held for 11.601303263s
	I0226 12:25:25.498768  747648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-700637
	I0226 12:25:25.516322  747648 ssh_runner.go:195] Run: cat /version.json
	I0226 12:25:25.516389  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:25.516666  747648 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 12:25:25.516741  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:25.548044  747648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37001 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa Username:docker}
	I0226 12:25:25.556497  747648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37001 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa Username:docker}
	I0226 12:25:25.664716  747648 ssh_runner.go:195] Run: systemctl --version
	I0226 12:25:25.801690  747648 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0226 12:25:25.962896  747648 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 12:25:25.967130  747648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0226 12:25:25.995272  747648 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0226 12:25:25.995346  747648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0226 12:25:26.066195  747648 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0226 12:25:26.066265  747648 start.go:475] detecting cgroup driver to use...
	I0226 12:25:26.066292  747648 start.go:479] using "systemd" cgroup driver as enforced via flags
	I0226 12:25:26.066391  747648 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0226 12:25:26.088974  747648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0226 12:25:26.107296  747648 docker.go:217] disabling cri-docker service (if available) ...
	I0226 12:25:26.107443  747648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0226 12:25:26.123671  747648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0226 12:25:26.149748  747648 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0226 12:25:26.274998  747648 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0226 12:25:26.438325  747648 docker.go:233] disabling docker service ...
	I0226 12:25:26.438465  747648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0226 12:25:26.481521  747648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0226 12:25:26.497162  747648 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0226 12:25:26.659370  747648 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0226 12:25:26.836392  747648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0226 12:25:26.849654  747648 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 12:25:26.874642  747648 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0226 12:25:26.874714  747648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 12:25:26.891749  747648 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0226 12:25:26.891819  747648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 12:25:26.907488  747648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 12:25:26.923651  747648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 12:25:26.936553  747648 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 12:25:26.952599  747648 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 12:25:26.967146  747648 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 12:25:26.982371  747648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 12:25:27.178468  747648 ssh_runner.go:195] Run: sudo systemctl restart crio
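	(The three sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings sketched below. This is inferred from the commands themselves, not a capture of the file, and any other keys or section headers already present in the kicbase image are left untouched.)
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"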
	I0226 12:25:27.379142  747648 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0226 12:25:27.379215  747648 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0226 12:25:27.391916  747648 start.go:543] Will wait 60s for crictl version
	I0226 12:25:27.392001  747648 ssh_runner.go:195] Run: which crictl
	I0226 12:25:27.406715  747648 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0226 12:25:27.487004  747648 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0226 12:25:27.487087  747648 ssh_runner.go:195] Run: crio --version
	I0226 12:25:27.558894  747648 ssh_runner.go:195] Run: crio --version
	I0226 12:25:27.628765  747648 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0226 12:25:27.630898  747648 cli_runner.go:164] Run: docker network inspect force-systemd-flag-700637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 12:25:27.651201  747648 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0226 12:25:27.655343  747648 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 12:25:27.669800  747648 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0226 12:25:27.669864  747648 ssh_runner.go:195] Run: sudo crictl images --output json
	I0226 12:25:27.795488  747648 crio.go:496] all images are preloaded for cri-o runtime.
	I0226 12:25:27.795515  747648 crio.go:415] Images already preloaded, skipping extraction
	I0226 12:25:27.795580  747648 ssh_runner.go:195] Run: sudo crictl images --output json
	I0226 12:25:27.861078  747648 crio.go:496] all images are preloaded for cri-o runtime.
	I0226 12:25:27.861104  747648 cache_images.go:84] Images are preloaded, skipping loading
	I0226 12:25:27.861181  747648 ssh_runner.go:195] Run: crio config
	I0226 12:25:27.956004  747648 cni.go:84] Creating CNI manager for ""
	I0226 12:25:27.956030  747648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 12:25:27.956078  747648 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 12:25:27.956105  747648 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-700637 NodeName:force-systemd-flag-700637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0226 12:25:27.956315  747648 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-700637"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
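	(The %!"(MISSING) sequences in the evictionHard block above are Go fmt missing-argument markers introduced when minikube echoes the generated config into its log; the kubeadm.yaml actually written to the node is expected to carry plain percent values. A minimal sketch of how that block should read on disk, inferred rather than captured verbatim here:)
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"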
	I0226 12:25:27.956396  747648 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=force-systemd-flag-700637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-700637 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0226 12:25:27.956490  747648 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0226 12:25:27.972074  747648 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 12:25:27.972315  747648 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 12:25:27.982501  747648 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I0226 12:25:28.010281  747648 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 12:25:28.036200  747648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0226 12:25:28.059623  747648 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0226 12:25:28.064012  747648 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 12:25:28.077174  747648 certs.go:56] Setting up /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637 for IP: 192.168.67.2
	I0226 12:25:28.077254  747648 certs.go:190] acquiring lock for shared ca certs: {Name:mkc71f6ba94614715b3b8dc8b06b5f59e5f1adfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:28.077456  747648 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.key
	I0226 12:25:28.077537  747648 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.key
	I0226 12:25:28.077611  747648 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.key
	I0226 12:25:28.077647  747648 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.crt with IP's: []
	I0226 12:25:28.661915  747648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.crt ...
	I0226 12:25:28.661950  747648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.crt: {Name:mkd884a24f94a96605c816a1dfcbdd6ab967557d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:28.662629  747648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.key ...
	I0226 12:25:28.662650  747648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.key: {Name:mk10e2e8738cc939520b05e12b7688e4884c6729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:28.663251  747648 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.key.c7fa3a9e
	I0226 12:25:28.663276  747648 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0226 12:25:26.822451  742422 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.842570745s)
	I0226 12:25:26.826482  742422 logs.go:123] Gathering logs for kube-controller-manager [912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed] ...
	I0226 12:25:26.826527  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed"
	I0226 12:25:26.921105  742422 logs.go:123] Gathering logs for CRI-O ...
	I0226 12:25:26.921131  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0226 12:25:27.067849  742422 logs.go:123] Gathering logs for etcd [fa9de77e9f95fbe9d4edb231f338c859854cbad6bb067bbc9e4878c99bf3dd96] ...
	I0226 12:25:27.067928  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa9de77e9f95fbe9d4edb231f338c859854cbad6bb067bbc9e4878c99bf3dd96"
	I0226 12:25:27.154105  742422 logs.go:123] Gathering logs for etcd [e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac] ...
	I0226 12:25:27.154231  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac"
	I0226 12:25:27.230632  742422 logs.go:123] Gathering logs for kube-scheduler [c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e] ...
	I0226 12:25:27.230717  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e"
	I0226 12:25:27.307524  742422 logs.go:123] Gathering logs for kindnet [37b20f6e4c3363990b1728031036b5bb9b7d7982cc23f183eb0e6c08f2f80e9e] ...
	I0226 12:25:27.307560  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37b20f6e4c3363990b1728031036b5bb9b7d7982cc23f183eb0e6c08f2f80e9e"
	I0226 12:25:27.430207  742422 logs.go:123] Gathering logs for kindnet [b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1] ...
	I0226 12:25:27.430279  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1"
	I0226 12:25:27.575925  742422 logs.go:123] Gathering logs for kube-controller-manager [75edd03cac47e5d5d8d371f1df9d4621bfa5f06cf53611bf6dc131eb31f13816] ...
	I0226 12:25:27.575950  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75edd03cac47e5d5d8d371f1df9d4621bfa5f06cf53611bf6dc131eb31f13816"
	I0226 12:25:27.680883  742422 logs.go:123] Gathering logs for dmesg ...
	I0226 12:25:27.680908  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 12:25:27.715237  742422 logs.go:123] Gathering logs for coredns [1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c] ...
	I0226 12:25:27.715311  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c"
	I0226 12:25:27.839857  742422 logs.go:123] Gathering logs for kube-scheduler [48e1c9fe1bef8a7a1668683a53f33ed6da71035b818e348421f28421d0c375a8] ...
	I0226 12:25:27.839881  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e1c9fe1bef8a7a1668683a53f33ed6da71035b818e348421f28421d0c375a8"
	I0226 12:25:27.907549  742422 logs.go:123] Gathering logs for kube-proxy [8e4d16140ea0b33e2b235d2b4233d1de3c2760692719185ea8efc8c25e287bfc] ...
	I0226 12:25:27.907623  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e4d16140ea0b33e2b235d2b4233d1de3c2760692719185ea8efc8c25e287bfc"
	I0226 12:25:27.967584  742422 logs.go:123] Gathering logs for kube-proxy [479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7] ...
	I0226 12:25:27.967690  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7"
	I0226 12:25:28.032887  742422 logs.go:123] Gathering logs for kube-apiserver [ee1c307b25cee84a6ed691ac79bdbf66fe243ae67eeed806ab2ca9188bcc6013] ...
	I0226 12:25:28.032915  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee1c307b25cee84a6ed691ac79bdbf66fe243ae67eeed806ab2ca9188bcc6013"
	I0226 12:25:28.162058  742422 logs.go:123] Gathering logs for container status ...
	I0226 12:25:28.162138  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0226 12:25:30.739455  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:30.753922  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0226 12:25:30.789204  742422 api_server.go:141] control plane version: v1.28.4
	I0226 12:25:30.789239  742422 api_server.go:131] duration metric: took 1m8.537236202s to wait for apiserver health ...
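
The apiserver health wait above is a plain poll of /healthz until it returns 200. A minimal sketch, assuming the cluster CA at the certs path shown earlier in the log and that /healthz is reachable without credentials (both assumptions, depending on cluster configuration):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	for {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthy")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second) // retry until the control plane answers
	}
}
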
	I0226 12:25:30.789250  742422 cni.go:84] Creating CNI manager for ""
	I0226 12:25:30.789257  742422 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 12:25:30.792836  742422 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0226 12:25:29.155339  747648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.crt.c7fa3a9e ...
	I0226 12:25:29.155369  747648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.crt.c7fa3a9e: {Name:mk2ca6642f6dba239225e03b5c7d36322df38943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:29.156108  747648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.key.c7fa3a9e ...
	I0226 12:25:29.156127  747648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.key.c7fa3a9e: {Name:mk28285b16100246743138c857290ff1e26bc647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:29.156222  747648 certs.go:337] copying /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.crt
	I0226 12:25:29.156313  747648 certs.go:341] copying /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.key
	I0226 12:25:29.156375  747648 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.key
	I0226 12:25:29.156392  747648 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.crt with IP's: []
	I0226 12:25:30.293819  747648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.crt ...
	I0226 12:25:30.293855  747648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.crt: {Name:mkcca1228e521afb7e235ea80ca2a660ae184ac7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:30.294624  747648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.key ...
	I0226 12:25:30.294649  747648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.key: {Name:mk28ba555e8a73381a06f10f18b14a775bcff273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:30.294750  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0226 12:25:30.294771  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0226 12:25:30.294784  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0226 12:25:30.294800  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0226 12:25:30.294812  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0226 12:25:30.294828  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0226 12:25:30.294840  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0226 12:25:30.294857  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0226 12:25:30.294922  747648 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/613988.pem (1338 bytes)
	W0226 12:25:30.294969  747648 certs.go:433] ignoring /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/613988_empty.pem, impossibly tiny 0 bytes
	I0226 12:25:30.294984  747648 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca-key.pem (1675 bytes)
	I0226 12:25:30.295012  747648 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem (1082 bytes)
	I0226 12:25:30.295041  747648 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem (1123 bytes)
	I0226 12:25:30.295072  747648 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem (1679 bytes)
	I0226 12:25:30.295124  747648 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem (1708 bytes)
	I0226 12:25:30.295168  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem -> /usr/share/ca-certificates/6139882.pem
	I0226 12:25:30.295217  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0226 12:25:30.295229  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/613988.pem -> /usr/share/ca-certificates/613988.pem
	I0226 12:25:30.295793  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 12:25:30.320460  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0226 12:25:30.345281  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 12:25:30.370546  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0226 12:25:30.396044  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 12:25:30.421867  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0226 12:25:30.447423  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 12:25:30.473743  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0226 12:25:30.500653  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem --> /usr/share/ca-certificates/6139882.pem (1708 bytes)
	I0226 12:25:30.527091  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 12:25:30.553088  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/certs/613988.pem --> /usr/share/ca-certificates/613988.pem (1338 bytes)
	I0226 12:25:30.578157  747648 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 12:25:30.596501  747648 ssh_runner.go:195] Run: openssl version
	I0226 12:25:30.602628  747648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6139882.pem && ln -fs /usr/share/ca-certificates/6139882.pem /etc/ssl/certs/6139882.pem"
	I0226 12:25:30.612292  747648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6139882.pem
	I0226 12:25:30.615855  747648 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 11:52 /usr/share/ca-certificates/6139882.pem
	I0226 12:25:30.615918  747648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6139882.pem
	I0226 12:25:30.623066  747648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6139882.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 12:25:30.632717  747648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 12:25:30.642105  747648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 12:25:30.645920  747648 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 11:45 /usr/share/ca-certificates/minikubeCA.pem
	I0226 12:25:30.646001  747648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 12:25:30.652927  747648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 12:25:30.662554  747648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/613988.pem && ln -fs /usr/share/ca-certificates/613988.pem /etc/ssl/certs/613988.pem"
	I0226 12:25:30.672361  747648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/613988.pem
	I0226 12:25:30.676131  747648 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 11:52 /usr/share/ca-certificates/613988.pem
	I0226 12:25:30.676241  747648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/613988.pem
	I0226 12:25:30.684547  747648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/613988.pem /etc/ssl/certs/51391683.0"
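
The openssl/ln sequence above installs each CA certificate under /etc/ssl/certs using its OpenSSL subject-hash name, which is how OpenSSL-based clients locate trusted certs. A sketch of the same two steps driven from Go, shelling out to openssl as the log does (linkBySubjectHash is an illustrative helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into /etc/ssl/certs under the
// <subject-hash>.0 name that OpenSSL-based clients look up.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/613988.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
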
	I0226 12:25:30.695823  747648 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 12:25:30.699224  747648 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0226 12:25:30.699276  747648 kubeadm.go:404] StartCluster: {Name:force-systemd-flag-700637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-700637 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 12:25:30.699372  747648 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0226 12:25:30.699436  747648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0226 12:25:30.737898  747648 cri.go:89] found id: ""
	I0226 12:25:30.738027  747648 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 12:25:30.748391  747648 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 12:25:30.759666  747648 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 12:25:30.759766  747648 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 12:25:30.773548  747648 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
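
The "config check failed, skipping stale config cleanup" message above means at least one of the four kubeconfig files is absent, so there is nothing stale to clean before kubeadm init. A simplified local sketch of that existence check (minikube actually runs ls over SSH on the node):

package main

import (
	"fmt"
	"os"
)

// haveStaleConfig reports whether all four kubeconfig files already exist;
// on a fresh node none of them do, so cleanup is skipped.
func haveStaleConfig() bool {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println("stale config present:", haveStaleConfig())
}
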
	I0226 12:25:30.773605  747648 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 12:25:30.848849  747648 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0226 12:25:30.848951  747648 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 12:25:30.910316  747648 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0226 12:25:30.910419  747648 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0226 12:25:30.910477  747648 kubeadm.go:322] OS: Linux
	I0226 12:25:30.910545  747648 kubeadm.go:322] CGROUPS_CPU: enabled
	I0226 12:25:30.910610  747648 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0226 12:25:30.910677  747648 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0226 12:25:30.910746  747648 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0226 12:25:30.910816  747648 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0226 12:25:30.910884  747648 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0226 12:25:30.910958  747648 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0226 12:25:30.911024  747648 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0226 12:25:30.911089  747648 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0226 12:25:31.019452  747648 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 12:25:31.019623  747648 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 12:25:31.019747  747648 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 12:25:31.425068  747648 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 12:25:31.429938  747648 out.go:204]   - Generating certificates and keys ...
	I0226 12:25:31.430086  747648 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 12:25:31.430171  747648 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 12:25:31.788793  747648 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0226 12:25:32.561018  747648 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0226 12:25:32.854795  747648 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0226 12:25:33.281649  747648 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0226 12:25:30.794744  742422 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0226 12:25:30.799542  742422 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0226 12:25:30.799567  742422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0226 12:25:30.819612  742422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0226 12:25:32.204645  742422 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.384993801s)
	I0226 12:25:32.204697  742422 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 12:25:32.214059  742422 system_pods.go:59] 7 kube-system pods found
	I0226 12:25:32.214157  742422 system_pods.go:61] "coredns-5dd5756b68-jphcc" [1389dc8e-2557-486e-be8a-598958aa8372] Running
	I0226 12:25:32.214185  742422 system_pods.go:61] "etcd-pause-534129" [458dda30-3b78-4276-9529-49adfbcadc22] Running
	I0226 12:25:32.214209  742422 system_pods.go:61] "kindnet-zgq8r" [10aa1f57-33c0-4f80-b9dc-ac083e1b47c3] Running
	I0226 12:25:32.214242  742422 system_pods.go:61] "kube-apiserver-pause-534129" [c2fabc8f-4ef0-4904-88e2-61c5677dc00e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0226 12:25:32.214263  742422 system_pods.go:61] "kube-controller-manager-pause-534129" [f2e96412-fff9-40c8-bdaf-3fb61ea1f0b9] Running
	I0226 12:25:32.214290  742422 system_pods.go:61] "kube-proxy-6stnr" [ddf2146c-15dd-4280-b05f-6476a69b62a2] Running
	I0226 12:25:32.214315  742422 system_pods.go:61] "kube-scheduler-pause-534129" [d2d23bbe-5a4c-4613-9d53-baa17af001cc] Running
	I0226 12:25:32.214340  742422 system_pods.go:74] duration metric: took 9.635341ms to wait for pod list to return data ...
	I0226 12:25:32.214362  742422 node_conditions.go:102] verifying NodePressure condition ...
	I0226 12:25:32.219535  742422 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0226 12:25:32.219607  742422 node_conditions.go:123] node cpu capacity is 2
	I0226 12:25:32.219640  742422 node_conditions.go:105] duration metric: took 5.25819ms to run NodePressure ...
	I0226 12:25:32.219683  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 12:25:32.451457  742422 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0226 12:25:32.461024  742422 kubeadm.go:787] kubelet initialised
	I0226 12:25:32.461100  742422 kubeadm.go:788] duration metric: took 9.580171ms waiting for restarted kubelet to initialise ...
	I0226 12:25:32.461124  742422 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 12:25:32.468963  742422 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:32.479836  742422 pod_ready.go:92] pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:32.479858  742422 pod_ready.go:81] duration metric: took 10.825131ms waiting for pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:32.479873  742422 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:32.491981  742422 pod_ready.go:92] pod "etcd-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:32.492048  742422 pod_ready.go:81] duration metric: took 12.166482ms waiting for pod "etcd-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:32.492079  742422 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:34.500037  742422 pod_ready.go:102] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"False"
	I0226 12:25:33.849071  747648 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0226 12:25:33.849411  747648 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-700637 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0226 12:25:34.216088  747648 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0226 12:25:34.216439  747648 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-700637 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0226 12:25:34.624708  747648 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0226 12:25:34.971969  747648 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0226 12:25:35.407690  747648 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0226 12:25:35.408049  747648 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 12:25:35.854753  747648 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 12:25:36.237418  747648 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 12:25:36.540707  747648 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 12:25:36.988495  747648 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 12:25:36.989326  747648 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 12:25:36.992008  747648 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 12:25:36.994700  747648 out.go:204]   - Booting up control plane ...
	I0226 12:25:36.994805  747648 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 12:25:36.994880  747648 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 12:25:36.994944  747648 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 12:25:37.008734  747648 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 12:25:37.011182  747648 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 12:25:37.011461  747648 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0226 12:25:37.120388  747648 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 12:25:36.501071  742422 pod_ready.go:102] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"False"
	I0226 12:25:39.004252  742422 pod_ready.go:102] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"False"
	I0226 12:25:41.517429  742422 pod_ready.go:102] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"False"
	I0226 12:25:42.000467  742422 pod_ready.go:92] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.000554  742422 pod_ready.go:81] duration metric: took 9.508452394s waiting for pod "kube-apiserver-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.000581  742422 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.009072  742422 pod_ready.go:92] pod "kube-controller-manager-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.009098  742422 pod_ready.go:81] duration metric: took 8.495092ms waiting for pod "kube-controller-manager-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.009110  742422 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6stnr" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.018114  742422 pod_ready.go:92] pod "kube-proxy-6stnr" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.018195  742422 pod_ready.go:81] duration metric: took 9.077393ms waiting for pod "kube-proxy-6stnr" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.018223  742422 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.027431  742422 pod_ready.go:92] pod "kube-scheduler-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.027499  742422 pod_ready.go:81] duration metric: took 9.255833ms waiting for pod "kube-scheduler-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.027524  742422 pod_ready.go:38] duration metric: took 9.566372506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 12:25:42.027577  742422 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0226 12:25:42.039342  742422 ops.go:34] apiserver oom_adj: -16
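
The oom_adj probe above reads /proc/<pid>/oom_adj for the kube-apiserver process; -16 tells the kernel OOM killer to strongly prefer other victims. A minimal sketch of the same read, using pgrep as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver process, as `pgrep -xnf` does in the log.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", pid))
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
}
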
	I0226 12:25:42.039413  742422 kubeadm.go:640] restartCluster took 1m41.422006574s
	I0226 12:25:42.039437  742422 kubeadm.go:406] StartCluster complete in 1m41.495813863s
	I0226 12:25:42.039485  742422 settings.go:142] acquiring lock: {Name:mk1588246e1eeb31f86f63cf3c470d51f6fe64da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:42.039581  742422 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 12:25:42.040374  742422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/kubeconfig: {Name:mk0efe1f972316757632066327a27c71356b5734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:42.040706  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0226 12:25:42.041053  742422 config.go:182] Loaded profile config "pause-534129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 12:25:42.041090  742422 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0226 12:25:42.043562  742422 out.go:177] * Enabled addons: 
	I0226 12:25:42.042134  742422 kapi.go:59] client config for pause-534129: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/client.crt", KeyFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/client.key", CAFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1702d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0226 12:25:42.045975  742422 addons.go:505] enable addons completed in 4.881948ms: enabled=[]
	I0226 12:25:42.050182  742422 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-534129" context rescaled to 1 replicas
	I0226 12:25:42.050264  742422 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0226 12:25:42.053963  742422 out.go:177] * Verifying Kubernetes components...
	I0226 12:25:42.056049  742422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 12:25:42.259970  742422 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0226 12:25:42.260021  742422 node_ready.go:35] waiting up to 6m0s for node "pause-534129" to be "Ready" ...
	I0226 12:25:42.265667  742422 node_ready.go:49] node "pause-534129" has status "Ready":"True"
	I0226 12:25:42.265689  742422 node_ready.go:38] duration metric: took 5.653477ms waiting for node "pause-534129" to be "Ready" ...
	I0226 12:25:42.265700  742422 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 12:25:42.277061  742422 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.396840  742422 pod_ready.go:92] pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.396867  742422 pod_ready.go:81] duration metric: took 119.730999ms waiting for pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.396880  742422 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.797380  742422 pod_ready.go:92] pod "etcd-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.797409  742422 pod_ready.go:81] duration metric: took 400.521397ms waiting for pod "etcd-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.797424  742422 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.196978  742422 pod_ready.go:92] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:43.197007  742422 pod_ready.go:81] duration metric: took 399.574923ms waiting for pod "kube-apiserver-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.197026  742422 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.597217  742422 pod_ready.go:92] pod "kube-controller-manager-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:43.597255  742422 pod_ready.go:81] duration metric: took 400.210974ms waiting for pod "kube-controller-manager-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.597267  742422 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6stnr" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.996858  742422 pod_ready.go:92] pod "kube-proxy-6stnr" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:43.996880  742422 pod_ready.go:81] duration metric: took 399.605052ms waiting for pod "kube-proxy-6stnr" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.996892  742422 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:44.396880  742422 pod_ready.go:92] pod "kube-scheduler-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:44.396954  742422 pod_ready.go:81] duration metric: took 400.052014ms waiting for pod "kube-scheduler-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:44.396981  742422 pod_ready.go:38] duration metric: took 2.131269367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 12:25:44.397025  742422 api_server.go:52] waiting for apiserver process to appear ...
	I0226 12:25:44.397109  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 12:25:44.409474  742422 api_server.go:72] duration metric: took 2.359155863s to wait for apiserver process to appear ...
	I0226 12:25:44.409540  742422 api_server.go:88] waiting for apiserver healthz status ...
	I0226 12:25:44.409576  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:44.418165  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0226 12:25:44.420995  742422 api_server.go:141] control plane version: v1.28.4
	I0226 12:25:44.421064  742422 api_server.go:131] duration metric: took 11.501042ms to wait for apiserver health ...
	I0226 12:25:44.421088  742422 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 12:25:44.599968  742422 system_pods.go:59] 7 kube-system pods found
	I0226 12:25:44.600002  742422 system_pods.go:61] "coredns-5dd5756b68-jphcc" [1389dc8e-2557-486e-be8a-598958aa8372] Running
	I0226 12:25:44.600008  742422 system_pods.go:61] "etcd-pause-534129" [458dda30-3b78-4276-9529-49adfbcadc22] Running
	I0226 12:25:44.600012  742422 system_pods.go:61] "kindnet-zgq8r" [10aa1f57-33c0-4f80-b9dc-ac083e1b47c3] Running
	I0226 12:25:44.600016  742422 system_pods.go:61] "kube-apiserver-pause-534129" [c2fabc8f-4ef0-4904-88e2-61c5677dc00e] Running
	I0226 12:25:44.600053  742422 system_pods.go:61] "kube-controller-manager-pause-534129" [f2e96412-fff9-40c8-bdaf-3fb61ea1f0b9] Running
	I0226 12:25:44.600064  742422 system_pods.go:61] "kube-proxy-6stnr" [ddf2146c-15dd-4280-b05f-6476a69b62a2] Running
	I0226 12:25:44.600075  742422 system_pods.go:61] "kube-scheduler-pause-534129" [d2d23bbe-5a4c-4613-9d53-baa17af001cc] Running
	I0226 12:25:44.600081  742422 system_pods.go:74] duration metric: took 178.972806ms to wait for pod list to return data ...
	I0226 12:25:44.600093  742422 default_sa.go:34] waiting for default service account to be created ...
	I0226 12:25:44.796417  742422 default_sa.go:45] found service account: "default"
	I0226 12:25:44.796446  742422 default_sa.go:55] duration metric: took 196.346157ms for default service account to be created ...
	I0226 12:25:44.796459  742422 system_pods.go:116] waiting for k8s-apps to be running ...
	I0226 12:25:45.000222  742422 system_pods.go:86] 7 kube-system pods found
	I0226 12:25:45.000264  742422 system_pods.go:89] "coredns-5dd5756b68-jphcc" [1389dc8e-2557-486e-be8a-598958aa8372] Running
	I0226 12:25:45.000272  742422 system_pods.go:89] "etcd-pause-534129" [458dda30-3b78-4276-9529-49adfbcadc22] Running
	I0226 12:25:45.000277  742422 system_pods.go:89] "kindnet-zgq8r" [10aa1f57-33c0-4f80-b9dc-ac083e1b47c3] Running
	I0226 12:25:45.000281  742422 system_pods.go:89] "kube-apiserver-pause-534129" [c2fabc8f-4ef0-4904-88e2-61c5677dc00e] Running
	I0226 12:25:45.000285  742422 system_pods.go:89] "kube-controller-manager-pause-534129" [f2e96412-fff9-40c8-bdaf-3fb61ea1f0b9] Running
	I0226 12:25:45.000289  742422 system_pods.go:89] "kube-proxy-6stnr" [ddf2146c-15dd-4280-b05f-6476a69b62a2] Running
	I0226 12:25:45.000293  742422 system_pods.go:89] "kube-scheduler-pause-534129" [d2d23bbe-5a4c-4613-9d53-baa17af001cc] Running
	I0226 12:25:45.000301  742422 system_pods.go:126] duration metric: took 203.836643ms to wait for k8s-apps to be running ...
	I0226 12:25:45.000312  742422 system_svc.go:44] waiting for kubelet service to be running ....
	I0226 12:25:45.000394  742422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 12:25:45.039346  742422 system_svc.go:56] duration metric: took 39.020604ms WaitForService to wait for kubelet.
	I0226 12:25:45.039386  742422 kubeadm.go:581] duration metric: took 2.989077825s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0226 12:25:45.039408  742422 node_conditions.go:102] verifying NodePressure condition ...
	I0226 12:25:45.197578  742422 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0226 12:25:45.197615  742422 node_conditions.go:123] node cpu capacity is 2
	I0226 12:25:45.197684  742422 node_conditions.go:105] duration metric: took 158.248997ms to run NodePressure ...
	I0226 12:25:45.197704  742422 start.go:228] waiting for startup goroutines ...
	I0226 12:25:45.197712  742422 start.go:233] waiting for cluster config update ...
	I0226 12:25:45.197726  742422 start.go:242] writing updated cluster config ...
	I0226 12:25:45.198124  742422 ssh_runner.go:195] Run: rm -f paused
	I0226 12:25:45.275137  742422 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0226 12:25:45.277518  742422 out.go:177] * Done! kubectl is now configured to use "pause-534129" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 26 12:25:22 pause-534129 crio[2309]: time="2024-02-26 12:25:22.610877939Z" level=info msg="Starting container: 45baf296ecab261085d7178b6cf2fe7d71bafc7e411fe4c0c999ff4f60f0475c" id=fb1a3210-ce9c-4f79-99e7-32d7bcd42108 name=/runtime.v1.RuntimeService/StartContainer
	Feb 26 12:25:22 pause-534129 crio[2309]: time="2024-02-26 12:25:22.626543459Z" level=info msg="Started container" PID=3293 containerID=45baf296ecab261085d7178b6cf2fe7d71bafc7e411fe4c0c999ff4f60f0475c description=kube-system/kube-apiserver-pause-534129/kube-apiserver id=fb1a3210-ce9c-4f79-99e7-32d7bcd42108 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6dda202a706c625907dabfb7013a6e495bc4f6eaa1ff4370471517c6296cafb2
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.790959440Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.848876144Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.848910834Z" level=info msg="Updated default CNI network name to kindnet"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.848928073Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.900586329Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.900618467Z" level=info msg="Updated default CNI network name to kindnet"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.900635533Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.911896450Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.911930583Z" level=info msg="Updated default CNI network name to kindnet"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.911951013Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.931981893Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.932020136Z" level=info msg="Updated default CNI network name to kindnet"
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.880372903Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=3116683a-c61e-4c9d-85d9-e41b90a8cc27 name=/runtime.v1.ImageService/ImageStatus
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.880579485Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3116683a-c61e-4c9d-85d9-e41b90a8cc27 name=/runtime.v1.ImageService/ImageStatus
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.882069960Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=72f7d7c5-5c03-4077-bdd4-a30faf6d7f87 name=/runtime.v1.ImageService/ImageStatus
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.882266295Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=72f7d7c5-5c03-4077-bdd4-a30faf6d7f87 name=/runtime.v1.ImageService/ImageStatus
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.883363509Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-jphcc/coredns" id=968b7df7-ea0a-4947-a9b3-eb69f0537daa name=/runtime.v1.RuntimeService/CreateContainer
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.883457160Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.924583914Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3d4f8b8115659a0f4a964188cedde237fdd66ea8512b02045ca101d216ab502f/merged/etc/passwd: no such file or directory"
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.924639584Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3d4f8b8115659a0f4a964188cedde237fdd66ea8512b02045ca101d216ab502f/merged/etc/group: no such file or directory"
	Feb 26 12:25:31 pause-534129 crio[2309]: time="2024-02-26 12:25:31.031539484Z" level=info msg="Created container bd5e27bc4642d5e92acaee86b08b516bd3defdae66321b7ce6f93aad0de931fc: kube-system/coredns-5dd5756b68-jphcc/coredns" id=968b7df7-ea0a-4947-a9b3-eb69f0537daa name=/runtime.v1.RuntimeService/CreateContainer
	Feb 26 12:25:31 pause-534129 crio[2309]: time="2024-02-26 12:25:31.032149066Z" level=info msg="Starting container: bd5e27bc4642d5e92acaee86b08b516bd3defdae66321b7ce6f93aad0de931fc" id=046d17f2-be9d-4782-98a6-7790642073b6 name=/runtime.v1.RuntimeService/StartContainer
	Feb 26 12:25:31 pause-534129 crio[2309]: time="2024-02-26 12:25:31.044459026Z" level=info msg="Started container" PID=3629 containerID=bd5e27bc4642d5e92acaee86b08b516bd3defdae66321b7ce6f93aad0de931fc description=kube-system/coredns-5dd5756b68-jphcc/coredns id=046d17f2-be9d-4782-98a6-7790642073b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6274b942ec4cf388887e6fc3f9e3a07398b71d79d205989e0d21c7ba374cd356
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bd5e27bc4642d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                     16 seconds ago       Running             coredns                   2                   6274b942ec4cf       coredns-5dd5756b68-jphcc
	45baf296ecab2       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                     24 seconds ago       Running             kube-apiserver            2                   6dda202a706c6       kube-apiserver-pause-534129
	48e1c9fe1bef8       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                     25 seconds ago       Running             kube-scheduler            2                   df4f7a419dbae       kube-scheduler-pause-534129
	fa9de77e9f95f       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                     25 seconds ago       Running             etcd                      2                   ebb1f1255f0ab       etcd-pause-534129
	75edd03cac47e       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                     25 seconds ago       Running             kube-controller-manager   2                   9115a7a998ae7       kube-controller-manager-pause-534129
	ee1c307b25cee       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                     About a minute ago   Exited              kube-apiserver            1                   6dda202a706c6       kube-apiserver-pause-534129
	8e4d16140ea0b       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                     About a minute ago   Running             kube-proxy                1                   4890087dcd06e       kube-proxy-6stnr
	37b20f6e4c336       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                     About a minute ago   Running             kindnet-cni               1                   40c0bd9f28028       kindnet-zgq8r
	1a648e526091c       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                     About a minute ago   Exited              coredns                   1                   6274b942ec4cf       coredns-5dd5756b68-jphcc
	c5b43e3ff09fb       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                     About a minute ago   Exited              kube-scheduler            1                   df4f7a419dbae       kube-scheduler-pause-534129
	e692a1d35b627       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                     About a minute ago   Exited              etcd                      1                   ebb1f1255f0ab       etcd-pause-534129
	912ef0446e1c8       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                     About a minute ago   Exited              kube-controller-manager   1                   9115a7a998ae7       kube-controller-manager-pause-534129
	b52d308b68e37       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988   2 minutes ago        Exited              kindnet-cni               0                   40c0bd9f28028       kindnet-zgq8r
	479bd7603543d       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                     2 minutes ago        Exited              kube-proxy                0                   4890087dcd06e       kube-proxy-6stnr
	
	
	==> coredns [1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:48089 - 44370 "HINFO IN 7986261383484638819.541528343365983935. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.024839861s
	
	
	==> coredns [bd5e27bc4642d5e92acaee86b08b516bd3defdae66321b7ce6f93aad0de931fc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60210 - 20494 "HINFO IN 8590679988724923097.5105835473569010486. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025236198s
	
	
	==> describe nodes <==
	Name:               pause-534129
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-534129
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4011915ad0e9b27ff42994854397cc2ed93516c6
	                    minikube.k8s.io/name=pause-534129
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_26T12_23_23_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Feb 2024 12:23:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-534129
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Feb 2024 12:25:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Feb 2024 12:25:26 +0000   Mon, 26 Feb 2024 12:23:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Feb 2024 12:25:26 +0000   Mon, 26 Feb 2024 12:23:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Feb 2024 12:25:26 +0000   Mon, 26 Feb 2024 12:23:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Feb 2024 12:25:26 +0000   Mon, 26 Feb 2024 12:23:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-534129
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 7cb427044f2241f5bf4bfe352f0c72af
	  System UUID:                a7bcc0c0-9543-421f-ad24-7ab72b1a4ebf
	  Boot ID:                    18acc680-2ad9-4339-83b8-bdf83df5c458
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-jphcc                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m12s
	  kube-system                 etcd-pause-534129                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-zgq8r                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m12s
	  kube-system                 kube-apiserver-pause-534129             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-pause-534129    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-6stnr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-scheduler-pause-534129             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m10s                  kube-proxy       
	  Normal  Starting                 20s                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m34s (x8 over 2m35s)  kubelet          Node pause-534129 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m34s (x8 over 2m35s)  kubelet          Node pause-534129 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m34s (x8 over 2m35s)  kubelet          Node pause-534129 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m25s                  kubelet          Node pause-534129 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m25s                  kubelet          Node pause-534129 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m25s                  kubelet          Node pause-534129 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m12s                  node-controller  Node pause-534129 event: Registered Node pause-534129 in Controller
	  Normal  NodeReady                2m9s                   kubelet          Node pause-534129 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21s (x6 over 85s)      kubelet          Node pause-534129 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x6 over 85s)      kubelet          Node pause-534129 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x6 over 85s)      kubelet          Node pause-534129 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9s                     node-controller  Node pause-534129 event: Registered Node pause-534129 in Controller
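
The doubled NodeHasSufficient*/RegisteredNode events and the two kube-proxy Starting entries are consistent with the restart this test exercises: each component registers once on the original start (about 2m12s-2m34s ago) and again roughly 9-21 seconds before the log was captured. To pull the same condition and resource data directly, a sketch along these lines would work (context and node name assumed from this run):

# Sketch: node/context names assumed to match the minikube profile.
kubectl --context pause-534129 get node pause-534129 \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
# Requests/limits summary equivalent to the "Allocated resources" table above:
kubectl --context pause-534129 describe node pause-534129 | sed -n '/Allocated resources:/,/Events:/p'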
	
	
	==> dmesg <==
	[  +0.001050] FS-Cache: O-key=[8] '34e6c90000000000'
	[  +0.000726] FS-Cache: N-cookie c=000000e3 [p=000000da fl=2 nc=0 na=1]
	[  +0.000928] FS-Cache: N-cookie d=00000000841d89c0{9p.inode} n=0000000032cf39ba
	[  +0.001073] FS-Cache: N-key=[8] '34e6c90000000000'
	[  +0.003158] FS-Cache: Duplicate cookie detected
	[  +0.000716] FS-Cache: O-cookie c=000000dd [p=000000da fl=226 nc=0 na=1]
	[  +0.001109] FS-Cache: O-cookie d=00000000841d89c0{9p.inode} n=00000000e6eb4eb1
	[  +0.001112] FS-Cache: O-key=[8] '34e6c90000000000'
	[  +0.000700] FS-Cache: N-cookie c=000000e4 [p=000000da fl=2 nc=0 na=1]
	[  +0.001083] FS-Cache: N-cookie d=00000000841d89c0{9p.inode} n=00000000ef560195
	[  +0.001212] FS-Cache: N-key=[8] '34e6c90000000000'
	[  +2.493751] FS-Cache: Duplicate cookie detected
	[  +0.000848] FS-Cache: O-cookie c=000000db [p=000000da fl=226 nc=0 na=1]
	[  +0.001068] FS-Cache: O-cookie d=00000000841d89c0{9p.inode} n=00000000bf2216f9
	[  +0.001118] FS-Cache: O-key=[8] '33e6c90000000000'
	[  +0.000721] FS-Cache: N-cookie c=000000e6 [p=000000da fl=2 nc=0 na=1]
	[  +0.001039] FS-Cache: N-cookie d=00000000841d89c0{9p.inode} n=000000004a503876
	[  +0.001168] FS-Cache: N-key=[8] '33e6c90000000000'
	[  +0.381527] FS-Cache: Duplicate cookie detected
	[  +0.000702] FS-Cache: O-cookie c=000000e0 [p=000000da fl=226 nc=0 na=1]
	[  +0.000996] FS-Cache: O-cookie d=00000000841d89c0{9p.inode} n=0000000072d896e1
	[  +0.001119] FS-Cache: O-key=[8] '39e6c90000000000'
	[  +0.000703] FS-Cache: N-cookie c=000000e7 [p=000000da fl=2 nc=0 na=1]
	[  +0.001153] FS-Cache: N-cookie d=00000000841d89c0{9p.inode} n=0000000043dc7f09
	[  +0.001179] FS-Cache: N-key=[8] '39e6c90000000000'
	
	
	==> etcd [e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac] <==
	{"level":"info","ts":"2024-02-26T12:24:05.180372Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-26T12:24:06.176724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-26T12:24:06.176844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-26T12:24:06.176904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-02-26T12:24:06.176948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2024-02-26T12:24:06.176987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-26T12:24:06.177033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2024-02-26T12:24:06.17708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-26T12:24:06.180918Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-534129 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-26T12:24:06.181117Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T12:24:06.182153Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-02-26T12:24:06.182859Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T12:24:06.183858Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-26T12:24:06.18872Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-26T12:24:06.201535Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-26T12:24:19.128815Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-26T12:24:19.128948Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-534129","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2024-02-26T12:24:19.129653Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-26T12:24:19.129811Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-26T12:24:19.20504Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-26T12:24:19.2051Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-26T12:24:19.20515Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2024-02-26T12:24:19.207397Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-26T12:24:19.207542Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-26T12:24:19.207557Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-534129","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [fa9de77e9f95fbe9d4edb231f338c859854cbad6bb067bbc9e4878c99bf3dd96] <==
	{"level":"info","ts":"2024-02-26T12:25:21.763659Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-26T12:25:21.763669Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-26T12:25:21.763898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2024-02-26T12:25:21.763965Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2024-02-26T12:25:21.764053Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-26T12:25:21.764086Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-26T12:25:21.767278Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-26T12:25:21.767457Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-26T12:25:21.767489Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-26T12:25:21.767591Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-26T12:25:21.767605Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-26T12:25:23.128703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-26T12:25:23.128752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-26T12:25:23.128778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-26T12:25:23.128792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2024-02-26T12:25:23.128802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-02-26T12:25:23.128812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2024-02-26T12:25:23.128821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-02-26T12:25:23.13294Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-534129 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-26T12:25:23.133092Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T12:25:23.137337Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-26T12:25:23.137477Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T12:25:23.138518Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-02-26T12:25:23.148695Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-26T12:25:23.148774Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
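
Taken together, the two etcd logs show a single-voting-member cluster re-electing itself on every restart: term 2 to 3 when the first instance (e692a1d35b627) comes back, a clean shutdown at 12:24:19, then term 3 to 4 when the replacement (fa9de77e9f95f) starts at 12:25:23. A hedged way to confirm the member is healthy afterwards is to exec into the etcd static pod and reuse the certificate paths it logs at startup (pod name and context assumed from this run):

# Sketch: names and cert paths are taken from the listings/logs above, not verified here.
kubectl --context pause-534129 -n kube-system exec etcd-pause-534129 -- \
  etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
  --cert=/var/lib/minikube/certs/etcd/server.crt \
  --key=/var/lib/minikube/certs/etcd/server.key \
  endpoint status --write-out=table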
	
	
	==> kernel <==
	 12:25:47 up 1 day,  1:08,  0 users,  load average: 4.29, 2.74, 2.08
	Linux pause-534129 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [37b20f6e4c3363990b1728031036b5bb9b7d7982cc23f183eb0e6c08f2f80e9e] <==
	I0226 12:24:13.219909       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0226 12:24:13.220110       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0226 12:24:13.220267       1 main.go:116] setting mtu 1500 for CNI 
	I0226 12:24:13.220310       1 main.go:146] kindnetd IP family: "ipv4"
	I0226 12:24:13.220357       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0226 12:24:13.421479       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0226 12:24:13.421736       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0226 12:24:14.422326       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0226 12:25:19.997321       1 main.go:191] Failed to get nodes, retrying after error: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I0226 12:25:26.790674       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0226 12:25:26.790714       1 main.go:227] handling current node
	I0226 12:25:36.811618       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0226 12:25:36.811652       1 main.go:227] handling current node
	I0226 12:25:46.827982       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0226 12:25:46.828015       1 main.go:227] handling current node
	
	
	==> kindnet [b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1] <==
	I0226 12:23:37.819436       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0226 12:23:37.819513       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0226 12:23:37.819622       1 main.go:116] setting mtu 1500 for CNI 
	I0226 12:23:37.819632       1 main.go:146] kindnetd IP family: "ipv4"
	I0226 12:23:37.819642       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0226 12:23:38.124477       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0226 12:23:38.124587       1 main.go:227] handling current node
	I0226 12:23:48.229277       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0226 12:23:48.229305       1 main.go:227] handling current node
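
The exited kindnet instance (b52d308b68e37) simply stops handling nodes at 12:23:48, while its replacement (37b20f6e4c336) retries node listing against the in-cluster apiserver VIP 10.96.0.1:443 until the apiserver is reachable again at 12:25:26. If this were being debugged live, a quick check that the VIP actually fronts the restarted apiserver could look like this (context assumed):

# Sketch: confirm the kubernetes Service/endpoints kindnet was retrying against.
kubectl --context pause-534129 get svc kubernetes -o wide
kubectl --context pause-534129 get endpoints kubernetes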
	
	
	==> kube-apiserver [45baf296ecab261085d7178b6cf2fe7d71bafc7e411fe4c0c999ff4f60f0475c] <==
	I0226 12:25:26.490894       1 naming_controller.go:291] Starting NamingConditionController
	I0226 12:25:26.490940       1 establishing_controller.go:76] Starting EstablishingController
	I0226 12:25:26.490983       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0226 12:25:26.491024       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0226 12:25:26.491065       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0226 12:25:26.764539       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0226 12:25:26.770549       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0226 12:25:26.792239       1 shared_informer.go:318] Caches are synced for configmaps
	I0226 12:25:26.792334       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0226 12:25:26.799123       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0226 12:25:26.803784       1 aggregator.go:166] initial CRD sync complete...
	I0226 12:25:26.803879       1 autoregister_controller.go:141] Starting autoregister controller
	I0226 12:25:26.803928       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0226 12:25:26.803974       1 cache.go:39] Caches are synced for autoregister controller
	I0226 12:25:26.813456       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0226 12:25:26.825712       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0226 12:25:26.863589       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0226 12:25:26.867390       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0226 12:25:26.867475       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0226 12:25:27.305995       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0226 12:25:32.196154       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0226 12:25:32.346778       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0226 12:25:32.358058       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0226 12:25:32.432045       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0226 12:25:32.439574       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [ee1c307b25cee84a6ed691ac79bdbf66fe243ae67eeed806ab2ca9188bcc6013] <==
	W0226 12:25:04.727367       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:04.774852       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:05.096879       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:05.345099       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:05.544975       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:05.562169       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:05.836619       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:05.999916       1 logging.go:59] [core] [Channel #13 SubChannel #15] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:06.033465       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:06.708962       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:07.273795       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:07.496965       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:07.570220       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:07.915410       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:08.197909       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0226 12:25:10.034336       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	W0226 12:25:12.493026       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0226 12:25:15.094892       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0226 12:25:15.095003       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0226 12:25:15.096150       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0226 12:25:15.096216       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0226 12:25:15.097438       1 trace.go:236] Trace[1126869363]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:288bc4ad-1e34-4b07-b7ef-89b5a2dec07c,client:192.168.76.2,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-534129,user-agent:kubelet/v1.28.4 (linux/arm64) kubernetes/bae2c62,verb:GET (26-Feb-2024 12:25:05.095) (total time: 10001ms):
	Trace[1126869363]: [10.00199433s] [10.00199433s] END
	E0226 12:25:15.097600       1 timeout.go:142] post-timeout activity - time-elapsed: 2.567579ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-534129" result: <nil>
	F0226 12:25:19.826312       1 hooks.go:203] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
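
This first apiserver instance dies with a fatal PostStartHook failure once etcd on 127.0.0.1:2379 stops accepting connections; the container listed as 45baf296ecab2 above is its restart. Two hedged ways to pull the crashed instance's log again, with the pod name and container ID prefix taken from the listings in this report:

# Sketch: either ask the kubelet for the previous container's log, or read it via cri-o.
kubectl --context pause-534129 -n kube-system logs kube-apiserver-pause-534129 --previous
minikube -p pause-534129 ssh -- sudo crictl logs ee1c307b25cee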
	
	
	==> kube-controller-manager [75edd03cac47e5d5d8d371f1df9d4621bfa5f06cf53611bf6dc131eb31f13816] <==
	I0226 12:25:38.204340       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0226 12:25:38.204454       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-534129"
	I0226 12:25:38.204531       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0226 12:25:38.204578       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0226 12:25:38.204664       1 taint_manager.go:210] "Sending events to api server"
	I0226 12:25:38.204994       1 event.go:307] "Event occurred" object="pause-534129" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-534129 event: Registered Node pause-534129 in Controller"
	I0226 12:25:38.207911       1 shared_informer.go:318] Caches are synced for disruption
	I0226 12:25:38.212741       1 shared_informer.go:318] Caches are synced for persistent volume
	I0226 12:25:38.216423       1 shared_informer.go:318] Caches are synced for service account
	I0226 12:25:38.220587       1 shared_informer.go:318] Caches are synced for cronjob
	I0226 12:25:38.227626       1 shared_informer.go:318] Caches are synced for PV protection
	I0226 12:25:38.231630       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0226 12:25:38.239623       1 shared_informer.go:318] Caches are synced for attach detach
	I0226 12:25:38.239658       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0226 12:25:38.251832       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0226 12:25:38.256779       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0226 12:25:38.269269       1 shared_informer.go:318] Caches are synced for resource quota
	I0226 12:25:38.322244       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0226 12:25:38.328144       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0226 12:25:38.330383       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0226 12:25:38.332698       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0226 12:25:38.334484       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0226 12:25:38.771814       1 shared_informer.go:318] Caches are synced for garbage collector
	I0226 12:25:38.774046       1 shared_informer.go:318] Caches are synced for garbage collector
	I0226 12:25:38.774095       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed] <==
	I0226 12:24:04.683624       1 serving.go:348] Generated self-signed cert in-memory
	I0226 12:24:06.394295       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0226 12:24:06.394330       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 12:24:06.395674       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0226 12:24:06.395785       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0226 12:24:06.396712       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0226 12:24:06.396792       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7] <==
	I0226 12:23:36.539290       1 server_others.go:69] "Using iptables proxy"
	I0226 12:23:36.563086       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0226 12:23:36.671888       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0226 12:23:36.673596       1 server_others.go:152] "Using iptables Proxier"
	I0226 12:23:36.673639       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0226 12:23:36.673646       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0226 12:23:36.673680       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0226 12:23:36.673882       1 server.go:846] "Version info" version="v1.28.4"
	I0226 12:23:36.673901       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 12:23:36.675123       1 config.go:188] "Starting service config controller"
	I0226 12:23:36.675138       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0226 12:23:36.675155       1 config.go:97] "Starting endpoint slice config controller"
	I0226 12:23:36.675158       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0226 12:23:36.675489       1 config.go:315] "Starting node config controller"
	I0226 12:23:36.675496       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0226 12:23:36.775410       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0226 12:23:36.775415       1 shared_informer.go:318] Caches are synced for service config
	I0226 12:23:36.775537       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [8e4d16140ea0b33e2b235d2b4233d1de3c2760692719185ea8efc8c25e287bfc] <==
	I0226 12:24:14.265156       1 server_others.go:69] "Using iptables proxy"
	E0226 12:24:14.267307       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-534129": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:15.370195       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-534129": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:25:20.001295       1 node.go:130] Failed to retrieve node info: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes pause-534129)
	I0226 12:25:26.846486       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0226 12:25:27.152295       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0226 12:25:27.154210       1 server_others.go:152] "Using iptables Proxier"
	I0226 12:25:27.154310       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0226 12:25:27.154343       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0226 12:25:27.154500       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0226 12:25:27.162207       1 server.go:846] "Version info" version="v1.28.4"
	I0226 12:25:27.163537       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 12:25:27.164376       1 config.go:188] "Starting service config controller"
	I0226 12:25:27.164453       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0226 12:25:27.164509       1 config.go:97] "Starting endpoint slice config controller"
	I0226 12:25:27.164550       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0226 12:25:27.165452       1 config.go:315] "Starting node config controller"
	I0226 12:25:27.165513       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0226 12:25:27.267464       1 shared_informer.go:318] Caches are synced for node config
	I0226 12:25:27.267593       1 shared_informer.go:318] Caches are synced for service config
	I0226 12:25:27.267607       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [48e1c9fe1bef8a7a1668683a53f33ed6da71035b818e348421f28421d0c375a8] <==
	W0226 12:25:26.682444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0226 12:25:26.685020       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0226 12:25:26.682493       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0226 12:25:26.685100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0226 12:25:26.682540       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0226 12:25:26.685176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0226 12:25:26.682592       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0226 12:25:26.685248       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0226 12:25:26.682648       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0226 12:25:26.685323       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0226 12:25:26.682761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0226 12:25:26.685400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0226 12:25:26.682830       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0226 12:25:26.685479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0226 12:25:26.682866       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0226 12:25:26.685556       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0226 12:25:26.682927       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0226 12:25:26.685631       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0226 12:25:26.682968       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0226 12:25:26.685701       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0226 12:25:26.683012       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0226 12:25:26.685789       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0226 12:25:26.683048       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0226 12:25:26.685875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0226 12:25:28.256355       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e] <==
	E0226 12:24:12.030835       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:14.682535       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:14.682666       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:14.719369       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:14.719498       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:14.739975       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:14.740090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:15.556093       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:15.556137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:15.841931       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:15.841976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:15.970307       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:15.970353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:16.098572       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:16.098614       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:16.135031       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:16.135077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:16.174737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:16.174872       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:18.967977       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0226 12:24:18.968544       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0226 12:24:18.971740       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0226 12:24:18.971798       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0226 12:24:18.972121       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0226 12:24:18.972182       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 26 12:25:22 pause-534129 kubelet[3063]: I0226 12:25:22.478807    3063 status_manager.go:853] "Failed to get status for pod" podUID="e87490c73aa544fd7b73853c2ddd5f1f" pod="kube-system/kube-controller-manager-pause-534129" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-534129\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Feb 26 12:25:22 pause-534129 kubelet[3063]: I0226 12:25:22.481038    3063 status_manager.go:853] "Failed to get status for pod" podUID="630b5db601f14e02b490489c47f27f89" pod="kube-system/kube-scheduler-pause-534129" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-534129\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Feb 26 12:25:22 pause-534129 kubelet[3063]: E0226 12:25:22.681718    3063 projected.go:198] Error preparing data for projected volume kube-api-access-vmhd7 for pod kube-system/coredns-5dd5756b68-jphcc: failed to fetch token: Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/coredns/token": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 26 12:25:22 pause-534129 kubelet[3063]: E0226 12:25:22.681795    3063 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1389dc8e-2557-486e-be8a-598958aa8372-kube-api-access-vmhd7 podName:1389dc8e-2557-486e-be8a-598958aa8372 nodeName:}" failed. No retries permitted until 2024-02-26 12:25:24.681771742 +0000 UTC m=+62.744718940 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vmhd7" (UniqueName: "kubernetes.io/projected/1389dc8e-2557-486e-be8a-598958aa8372-kube-api-access-vmhd7") pod "coredns-5dd5756b68-jphcc" (UID: "1389dc8e-2557-486e-be8a-598958aa8372") : failed to fetch token: Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/coredns/token": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 26 12:25:22 pause-534129 kubelet[3063]: E0226 12:25:22.681859    3063 projected.go:198] Error preparing data for projected volume kube-api-access-bdgjt for pod kube-system/kube-proxy-6stnr: failed to fetch token: Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 26 12:25:22 pause-534129 kubelet[3063]: E0226 12:25:22.681888    3063 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ddf2146c-15dd-4280-b05f-6476a69b62a2-kube-api-access-bdgjt podName:ddf2146c-15dd-4280-b05f-6476a69b62a2 nodeName:}" failed. No retries permitted until 2024-02-26 12:25:24.681879037 +0000 UTC m=+62.744826236 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bdgjt" (UniqueName: "kubernetes.io/projected/ddf2146c-15dd-4280-b05f-6476a69b62a2-kube-api-access-bdgjt") pod "kube-proxy-6stnr" (UID: "ddf2146c-15dd-4280-b05f-6476a69b62a2") : failed to fetch token: Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 26 12:25:22 pause-534129 kubelet[3063]: E0226 12:25:22.681939    3063 projected.go:198] Error preparing data for projected volume kube-api-access-58rcj for pod kube-system/kindnet-zgq8r: failed to fetch token: Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet/token": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 26 12:25:22 pause-534129 kubelet[3063]: E0226 12:25:22.681967    3063 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10aa1f57-33c0-4f80-b9dc-ac083e1b47c3-kube-api-access-58rcj podName:10aa1f57-33c0-4f80-b9dc-ac083e1b47c3 nodeName:}" failed. No retries permitted until 2024-02-26 12:25:24.681959133 +0000 UTC m=+62.744906332 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-58rcj" (UniqueName: "kubernetes.io/projected/10aa1f57-33c0-4f80-b9dc-ac083e1b47c3-kube-api-access-58rcj") pod "kindnet-zgq8r" (UID: "10aa1f57-33c0-4f80-b9dc-ac083e1b47c3") : failed to fetch token: Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet/token": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 26 12:25:22 pause-534129 kubelet[3063]: I0226 12:25:22.867463    3063 kubelet_node_status.go:70] "Attempting to register node" node="pause-534129"
	Feb 26 12:25:22 pause-534129 kubelet[3063]: E0226 12:25:22.867867    3063 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="pause-534129"
	Feb 26 12:25:26 pause-534129 kubelet[3063]: I0226 12:25:26.069804    3063 kubelet_node_status.go:70] "Attempting to register node" node="pause-534129"
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.394277    3063 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.394568    3063 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.394708    3063 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.713266    3063 projected.go:198] Error preparing data for projected volume kube-api-access-vmhd7 for pod kube-system/coredns-5dd5756b68-jphcc: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:pause-534129" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'pause-534129' and this object
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.713353    3063 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1389dc8e-2557-486e-be8a-598958aa8372-kube-api-access-vmhd7 podName:1389dc8e-2557-486e-be8a-598958aa8372 nodeName:}" failed. No retries permitted until 2024-02-26 12:25:30.713330435 +0000 UTC m=+68.776277642 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vmhd7" (UniqueName: "kubernetes.io/projected/1389dc8e-2557-486e-be8a-598958aa8372-kube-api-access-vmhd7") pod "coredns-5dd5756b68-jphcc" (UID: "1389dc8e-2557-486e-be8a-598958aa8372") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:pause-534129" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'pause-534129' and this object
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.713556    3063 projected.go:198] Error preparing data for projected volume kube-api-access-58rcj for pod kube-system/kindnet-zgq8r: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:pause-534129" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'pause-534129' and this object
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.713612    3063 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10aa1f57-33c0-4f80-b9dc-ac083e1b47c3-kube-api-access-58rcj podName:10aa1f57-33c0-4f80-b9dc-ac083e1b47c3 nodeName:}" failed. No retries permitted until 2024-02-26 12:25:30.713598882 +0000 UTC m=+68.776546081 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-58rcj" (UniqueName: "kubernetes.io/projected/10aa1f57-33c0-4f80-b9dc-ac083e1b47c3-kube-api-access-58rcj") pod "kindnet-zgq8r" (UID: "10aa1f57-33c0-4f80-b9dc-ac083e1b47c3") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:pause-534129" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'pause-534129' and this object
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.714049    3063 projected.go:198] Error preparing data for projected volume kube-api-access-bdgjt for pod kube-system/kube-proxy-6stnr: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:pause-534129" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'pause-534129' and this object
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.714107    3063 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ddf2146c-15dd-4280-b05f-6476a69b62a2-kube-api-access-bdgjt podName:ddf2146c-15dd-4280-b05f-6476a69b62a2 nodeName:}" failed. No retries permitted until 2024-02-26 12:25:30.714090183 +0000 UTC m=+68.777037381 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bdgjt" (UniqueName: "kubernetes.io/projected/ddf2146c-15dd-4280-b05f-6476a69b62a2-kube-api-access-bdgjt") pod "kube-proxy-6stnr" (UID: "ddf2146c-15dd-4280-b05f-6476a69b62a2") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:pause-534129" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'pause-534129' and this object
	Feb 26 12:25:26 pause-534129 kubelet[3063]: I0226 12:25:26.827907    3063 kubelet_node_status.go:108] "Node was previously registered" node="pause-534129"
	Feb 26 12:25:26 pause-534129 kubelet[3063]: I0226 12:25:26.828015    3063 kubelet_node_status.go:73] "Successfully registered node" node="pause-534129"
	Feb 26 12:25:26 pause-534129 kubelet[3063]: I0226 12:25:26.836111    3063 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 26 12:25:26 pause-534129 kubelet[3063]: I0226 12:25:26.843766    3063 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 26 12:25:30 pause-534129 kubelet[3063]: I0226 12:25:30.879533    3063 scope.go:117] "RemoveContainer" containerID="1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-534129 -n pause-534129
helpers_test.go:261: (dbg) Run:  kubectl --context pause-534129 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-534129
helpers_test.go:235: (dbg) docker inspect pause-534129:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "407f1ca7602e4a32716a071088ff1960d80303110c592ded53aae3915daead03",
	        "Created": "2024-02-26T12:22:59.313548572Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 739196,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-26T12:22:59.704349543Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83c2925cbfe44b87b5c0672f16807927d87d8625e89de4dc154c45daaaa04b5b",
	        "ResolvConfPath": "/var/lib/docker/containers/407f1ca7602e4a32716a071088ff1960d80303110c592ded53aae3915daead03/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/407f1ca7602e4a32716a071088ff1960d80303110c592ded53aae3915daead03/hostname",
	        "HostsPath": "/var/lib/docker/containers/407f1ca7602e4a32716a071088ff1960d80303110c592ded53aae3915daead03/hosts",
	        "LogPath": "/var/lib/docker/containers/407f1ca7602e4a32716a071088ff1960d80303110c592ded53aae3915daead03/407f1ca7602e4a32716a071088ff1960d80303110c592ded53aae3915daead03-json.log",
	        "Name": "/pause-534129",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-534129:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-534129",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/da9b8555b43bef2db272a4e1e410ef1afbe3f774ff1d566608f87b9c0ae201c5-init/diff:/var/lib/docker/overlay2/f0e0da57c811333114b7a0181d8121ec20f9baacbcf19d34fad5038b1792b1cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da9b8555b43bef2db272a4e1e410ef1afbe3f774ff1d566608f87b9c0ae201c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da9b8555b43bef2db272a4e1e410ef1afbe3f774ff1d566608f87b9c0ae201c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da9b8555b43bef2db272a4e1e410ef1afbe3f774ff1d566608f87b9c0ae201c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-534129",
	                "Source": "/var/lib/docker/volumes/pause-534129/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-534129",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-534129",
	                "name.minikube.sigs.k8s.io": "pause-534129",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7614b7483eb66d795566f70a5b05b3169409fb71a7fe2ed6e5c62994fe0ff3ed",
	            "SandboxKey": "/var/run/docker/netns/7614b7483eb6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36996"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36995"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36992"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36994"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36993"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-534129": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "407f1ca7602e",
	                        "pause-534129"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "a3925384b61b0fa90f6ce71c7bcdb598ab12b2d4d4c59b2615f9495baf5587ce",
	                    "EndpointID": "6b097f85b8e161b1a5003ab18e6c12de169a9c35b6ea2ba835b5ad0678f6b34c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "pause-534129",
	                        "407f1ca7602e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-534129 -n pause-534129
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-534129 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-534129 logs -n 25: (2.124519626s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-737289            | NoKubernetes-737289       | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC | 26 Feb 24 12:18 UTC |
	| start   | -p NoKubernetes-737289            | NoKubernetes-737289       | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC | 26 Feb 24 12:18 UTC |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-737289 sudo       | NoKubernetes-737289       | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-269815         | missing-upgrade-269815    | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC | 26 Feb 24 12:19 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-737289            | NoKubernetes-737289       | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC | 26 Feb 24 12:18 UTC |
	| start   | -p NoKubernetes-737289            | NoKubernetes-737289       | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC | 26 Feb 24 12:18 UTC |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-737289 sudo       | NoKubernetes-737289       | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-737289            | NoKubernetes-737289       | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC | 26 Feb 24 12:18 UTC |
	| start   | -p kubernetes-upgrade-647247      | kubernetes-upgrade-647247 | jenkins | v1.32.0 | 26 Feb 24 12:18 UTC | 26 Feb 24 12:19 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-269815         | missing-upgrade-269815    | jenkins | v1.32.0 | 26 Feb 24 12:19 UTC | 26 Feb 24 12:19 UTC |
	| start   | -p stopped-upgrade-535150         | minikube                  | jenkins | v1.26.0 | 26 Feb 24 12:19 UTC | 26 Feb 24 12:20 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --vm-driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-647247      | kubernetes-upgrade-647247 | jenkins | v1.32.0 | 26 Feb 24 12:19 UTC | 26 Feb 24 12:19 UTC |
	| start   | -p kubernetes-upgrade-647247      | kubernetes-upgrade-647247 | jenkins | v1.32.0 | 26 Feb 24 12:19 UTC | 26 Feb 24 12:24 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-535150 stop       | minikube                  | jenkins | v1.26.0 | 26 Feb 24 12:20 UTC | 26 Feb 24 12:20 UTC |
	| start   | -p stopped-upgrade-535150         | stopped-upgrade-535150    | jenkins | v1.32.0 | 26 Feb 24 12:20 UTC | 26 Feb 24 12:20 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-535150         | stopped-upgrade-535150    | jenkins | v1.32.0 | 26 Feb 24 12:21 UTC | 26 Feb 24 12:21 UTC |
	| start   | -p running-upgrade-462105         | minikube                  | jenkins | v1.26.0 | 26 Feb 24 12:21 UTC | 26 Feb 24 12:21 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --vm-driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p running-upgrade-462105         | running-upgrade-462105    | jenkins | v1.32.0 | 26 Feb 24 12:21 UTC | 26 Feb 24 12:22 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-462105         | running-upgrade-462105    | jenkins | v1.32.0 | 26 Feb 24 12:22 UTC | 26 Feb 24 12:22 UTC |
	| start   | -p pause-534129 --memory=2048     | pause-534129              | jenkins | v1.32.0 | 26 Feb 24 12:22 UTC | 26 Feb 24 12:23 UTC |
	|         | --install-addons=false            |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker        |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p pause-534129                   | pause-534129              | jenkins | v1.32.0 | 26 Feb 24 12:23 UTC | 26 Feb 24 12:25 UTC |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-647247      | kubernetes-upgrade-647247 | jenkins | v1.32.0 | 26 Feb 24 12:24 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-647247      | kubernetes-upgrade-647247 | jenkins | v1.32.0 | 26 Feb 24 12:24 UTC | 26 Feb 24 12:25 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-647247      | kubernetes-upgrade-647247 | jenkins | v1.32.0 | 26 Feb 24 12:25 UTC | 26 Feb 24 12:25 UTC |
	| start   | -p force-systemd-flag-700637      | force-systemd-flag-700637 | jenkins | v1.32.0 | 26 Feb 24 12:25 UTC |                     |
	|         | --memory=2048 --force-systemd     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 12:25:13
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 12:25:13.687294  747648 out.go:291] Setting OutFile to fd 1 ...
	I0226 12:25:13.687478  747648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 12:25:13.687491  747648 out.go:304] Setting ErrFile to fd 2...
	I0226 12:25:13.687497  747648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 12:25:13.687809  747648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
	I0226 12:25:13.688290  747648 out.go:298] Setting JSON to false
	I0226 12:25:13.689396  747648 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":90460,"bootTime":1708859854,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0226 12:25:13.689475  747648 start.go:139] virtualization:  
	I0226 12:25:13.693015  747648 out.go:177] * [force-systemd-flag-700637] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0226 12:25:13.695650  747648 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 12:25:13.697508  747648 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 12:25:13.695776  747648 notify.go:220] Checking for updates...
	I0226 12:25:13.701696  747648 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 12:25:13.703738  747648 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	I0226 12:25:13.705782  747648 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0226 12:25:13.707763  747648 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 12:25:13.710688  747648 config.go:182] Loaded profile config "pause-534129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 12:25:13.710795  747648 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 12:25:13.731485  747648 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0226 12:25:13.731603  747648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 12:25:13.800826  747648 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-26 12:25:13.790657964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 12:25:13.800942  747648 docker.go:295] overlay module found
	I0226 12:25:13.803288  747648 out.go:177] * Using the docker driver based on user configuration
	I0226 12:25:13.805276  747648 start.go:299] selected driver: docker
	I0226 12:25:13.805301  747648 start.go:903] validating driver "docker" against <nil>
	I0226 12:25:13.805332  747648 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 12:25:13.806003  747648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 12:25:13.867397  747648 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-26 12:25:13.858447027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 12:25:13.867558  747648 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 12:25:13.867776  747648 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0226 12:25:13.869940  747648 out.go:177] * Using Docker driver with root privileges
	I0226 12:25:13.872256  747648 cni.go:84] Creating CNI manager for ""
	I0226 12:25:13.872283  747648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 12:25:13.872294  747648 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0226 12:25:13.872306  747648 start_flags.go:323] config:
	{Name:force-systemd-flag-700637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-700637 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 12:25:13.875608  747648 out.go:177] * Starting control plane node force-systemd-flag-700637 in cluster force-systemd-flag-700637
	I0226 12:25:13.877495  747648 cache.go:121] Beginning downloading kic base image for docker with crio
	I0226 12:25:13.879464  747648 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0226 12:25:13.881270  747648 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0226 12:25:13.881324  747648 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0226 12:25:13.881337  747648 cache.go:56] Caching tarball of preloaded images
	I0226 12:25:13.881356  747648 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 12:25:13.881430  747648 preload.go:174] Found /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0226 12:25:13.881440  747648 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0226 12:25:13.881538  747648 profile.go:148] Saving config to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/config.json ...
	I0226 12:25:13.881555  747648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/config.json: {Name:mk8f37d166b96780031b38a61c35ba31df8b188d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:13.897174  747648 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0226 12:25:13.897203  747648 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0226 12:25:13.897226  747648 cache.go:194] Successfully downloaded all kic artifacts
	I0226 12:25:13.897254  747648 start.go:365] acquiring machines lock for force-systemd-flag-700637: {Name:mk93b0e487703cd02bc1cda9f90ab0e728164928 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0226 12:25:13.897380  747648 start.go:369] acquired machines lock for "force-systemd-flag-700637" in 108.608µs
	I0226 12:25:13.897425  747648 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-700637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-700637 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0226 12:25:13.897508  747648 start.go:125] createHost starting for "" (driver="docker")
	I0226 12:25:10.494849  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:10.494894  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:10.494912  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:12.505293  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:12.505329  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:12.505376  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:14.517173  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:14.517200  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:14.517214  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
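
	The 742422 process above is minikube's restart logic polling the apiserver's /healthz endpoint until etcd and the post-start hooks report healthy; each 500 response lists which checks failed. As a rough illustration only (not minikube's actual implementation; the URL, timeout, and function name below are assumptions), a polling loop of this shape looks like:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// A real caller would trust the cluster CA; skipping verification
			// keeps this sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported "ok"
				}
				// A 500 here lists the failing checks, e.g. "[-]etcd failed".
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
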
	I0226 12:25:13.899687  747648 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0226 12:25:13.899990  747648 start.go:159] libmachine.API.Create for "force-systemd-flag-700637" (driver="docker")
	I0226 12:25:13.900024  747648 client.go:168] LocalClient.Create starting
	I0226 12:25:13.900098  747648 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem
	I0226 12:25:13.900140  747648 main.go:141] libmachine: Decoding PEM data...
	I0226 12:25:13.900160  747648 main.go:141] libmachine: Parsing certificate...
	I0226 12:25:13.900219  747648 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem
	I0226 12:25:13.900242  747648 main.go:141] libmachine: Decoding PEM data...
	I0226 12:25:13.900253  747648 main.go:141] libmachine: Parsing certificate...
	I0226 12:25:13.900627  747648 cli_runner.go:164] Run: docker network inspect force-systemd-flag-700637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0226 12:25:13.916123  747648 cli_runner.go:211] docker network inspect force-systemd-flag-700637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0226 12:25:13.916210  747648 network_create.go:281] running [docker network inspect force-systemd-flag-700637] to gather additional debugging logs...
	I0226 12:25:13.916226  747648 cli_runner.go:164] Run: docker network inspect force-systemd-flag-700637
	W0226 12:25:13.933643  747648 cli_runner.go:211] docker network inspect force-systemd-flag-700637 returned with exit code 1
	I0226 12:25:13.933673  747648 network_create.go:284] error running [docker network inspect force-systemd-flag-700637]: docker network inspect force-systemd-flag-700637: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-700637 not found
	I0226 12:25:13.933762  747648 network_create.go:286] output of [docker network inspect force-systemd-flag-700637]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-700637 not found
	
	** /stderr **
	I0226 12:25:13.933879  747648 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 12:25:13.949929  747648 network.go:212] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2477e72d3a54 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ec:44:d1:b6} reservation:<nil>}
	I0226 12:25:13.950285  747648 network.go:212] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9c91f8a50e5b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:f6:83:3b:99} reservation:<nil>}
	I0226 12:25:13.950777  747648 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400257a6d0}
	I0226 12:25:13.950802  747648 network_create.go:124] attempt to create docker network force-systemd-flag-700637 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0226 12:25:13.950870  747648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-700637 force-systemd-flag-700637
	I0226 12:25:14.013537  747648 network_create.go:108] docker network force-systemd-flag-700637 192.168.67.0/24 created
	I0226 12:25:14.013575  747648 kic.go:121] calculated static IP "192.168.67.2" for the "force-systemd-flag-700637" container
	I0226 12:25:14.013656  747648 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0226 12:25:14.033354  747648 cli_runner.go:164] Run: docker volume create force-systemd-flag-700637 --label name.minikube.sigs.k8s.io=force-systemd-flag-700637 --label created_by.minikube.sigs.k8s.io=true
	I0226 12:25:14.050412  747648 oci.go:103] Successfully created a docker volume force-systemd-flag-700637
	I0226 12:25:14.050499  747648 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-700637-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-700637 --entrypoint /usr/bin/test -v force-systemd-flag-700637:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0226 12:25:14.678593  747648 oci.go:107] Successfully prepared a docker volume force-systemd-flag-700637
	I0226 12:25:14.678659  747648 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0226 12:25:14.678680  747648 kic.go:194] Starting extracting preloaded images to volume ...
	I0226 12:25:14.678779  747648 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-700637:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
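
	The interleaved 747648 process is a parallel test (force-systemd-flag-700637) creating its own docker network: it walks candidate private /24 subnets, skips those already claimed by existing bridges (192.168.49.0/24, 192.168.58.0/24) and picks the first free one (192.168.67.0/24) before running docker network create. A minimal Go sketch of that selection, with hypothetical helper names and the taken set passed in directly instead of inspected from docker, is:

	package main

	import "fmt"

	// firstFreeSubnet returns the first candidate /24 that is not already in use.
	func firstFreeSubnet(candidates []string, taken map[string]bool) (string, error) {
		for _, cidr := range candidates {
			if taken[cidr] {
				fmt.Printf("skipping subnet %s that is taken\n", cidr)
				continue
			}
			return cidr, nil
		}
		return "", fmt.Errorf("no free private subnet among %d candidates", len(candidates))
	}

	func main() {
		candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}
		taken := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
		subnet, err := firstFreeSubnet(candidates, taken)
		if err != nil {
			fmt.Println(err)
			return
		}
		// The chosen subnet is then handed to `docker network create --subnet=...`.
		fmt.Println("using free private subnet", subnet)
	}
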
	I0226 12:25:16.527569  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:16.527624  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:16.527651  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:18.538803  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0226 12:25:18.538836  742422 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0226 12:25:18.538850  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:20.041974  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": EOF
	I0226 12:25:20.042023  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:19.108174  747648 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-700637:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (4.429351659s)
	I0226 12:25:19.108214  747648 kic.go:203] duration metric: took 4.429530 seconds to extract preloaded images to volume
	W0226 12:25:19.108369  747648 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0226 12:25:19.108490  747648 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0226 12:25:19.188138  747648 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-700637 --name force-systemd-flag-700637 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-700637 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-700637 --network force-systemd-flag-700637 --ip 192.168.67.2 --volume force-systemd-flag-700637:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0226 12:25:19.551026  747648 cli_runner.go:164] Run: docker container inspect force-systemd-flag-700637 --format={{.State.Running}}
	I0226 12:25:19.569105  747648 cli_runner.go:164] Run: docker container inspect force-systemd-flag-700637 --format={{.State.Status}}
	I0226 12:25:19.592269  747648 cli_runner.go:164] Run: docker exec force-systemd-flag-700637 stat /var/lib/dpkg/alternatives/iptables
	I0226 12:25:19.675217  747648 oci.go:144] the created container "force-systemd-flag-700637" has a running status.
	I0226 12:25:19.675254  747648 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa...
	I0226 12:25:19.935834  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0226 12:25:19.935894  747648 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0226 12:25:19.986382  747648 cli_runner.go:164] Run: docker container inspect force-systemd-flag-700637 --format={{.State.Status}}
	I0226 12:25:20.023862  747648 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0226 12:25:20.023890  747648 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-700637 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0226 12:25:20.127387  747648 cli_runner.go:164] Run: docker container inspect force-systemd-flag-700637 --format={{.State.Status}}
	I0226 12:25:20.152487  747648 machine.go:88] provisioning docker machine ...
	I0226 12:25:20.152525  747648 ubuntu.go:169] provisioning hostname "force-systemd-flag-700637"
	I0226 12:25:20.152607  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:20.180844  747648 main.go:141] libmachine: Using SSH client type: native
	I0226 12:25:20.181126  747648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 37001 <nil> <nil>}
	I0226 12:25:20.181144  747648 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-700637 && echo "force-systemd-flag-700637" | sudo tee /etc/hostname
	I0226 12:25:20.181712  747648 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47360->127.0.0.1:37001: read: connection reset by peer
	I0226 12:25:23.372822  747648 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-700637
	
	I0226 12:25:23.372968  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:23.404859  747648 main.go:141] libmachine: Using SSH client type: native
	I0226 12:25:23.405112  747648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 37001 <nil> <nil>}
	I0226 12:25:23.405129  747648 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-700637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-700637/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-700637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0226 12:25:23.580972  747648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0226 12:25:23.581002  747648 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18222-608626/.minikube CaCertPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18222-608626/.minikube}
	I0226 12:25:23.581039  747648 ubuntu.go:177] setting up certificates
	I0226 12:25:23.581055  747648 provision.go:83] configureAuth start
	I0226 12:25:23.581119  747648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-700637
	I0226 12:25:23.611328  747648 provision.go:138] copyHostCerts
	I0226 12:25:23.611369  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem
	I0226 12:25:23.611402  747648 exec_runner.go:144] found /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem, removing ...
	I0226 12:25:23.611409  747648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem
	I0226 12:25:23.611485  747648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18222-608626/.minikube/ca.pem (1082 bytes)
	I0226 12:25:23.611576  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem
	I0226 12:25:23.611610  747648 exec_runner.go:144] found /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem, removing ...
	I0226 12:25:23.611615  747648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem
	I0226 12:25:23.611648  747648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18222-608626/.minikube/cert.pem (1123 bytes)
	I0226 12:25:23.611698  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem
	I0226 12:25:23.611715  747648 exec_runner.go:144] found /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem, removing ...
	I0226 12:25:23.611721  747648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem
	I0226 12:25:23.611745  747648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18222-608626/.minikube/key.pem (1679 bytes)
	I0226 12:25:23.611832  747648 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-700637 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-flag-700637]
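
	The provisioning step above generates a server certificate signed by the shared minikube CA, with the node IP and hostnames from the log line as subject alternative names. The following is only a self-contained illustration of that pattern (a throwaway CA, SANs copied from the line above, errors elided for brevity; it is not minikube's code):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA standing in for minikube's shared ca.pem/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the IP and DNS SANs seen in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "force-systemd-flag-700637"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "force-systemd-flag-700637"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
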
	I0226 12:25:20.098858  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:35598->192.168.76.2:8443: read: connection reset by peer
	I0226 12:25:20.252354  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:20.252704  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0226 12:25:20.752704  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:20.753132  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0226 12:25:21.252601  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:21.253017  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0226 12:25:21.752495  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:21.752859  742422 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0226 12:25:22.252517  742422 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0226 12:25:22.252615  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0226 12:25:22.311546  742422 cri.go:89] found id: "ee1c307b25cee84a6ed691ac79bdbf66fe243ae67eeed806ab2ca9188bcc6013"
	I0226 12:25:22.311565  742422 cri.go:89] found id: ""
	I0226 12:25:22.311573  742422 logs.go:276] 1 containers: [ee1c307b25cee84a6ed691ac79bdbf66fe243ae67eeed806ab2ca9188bcc6013]
	I0226 12:25:22.311638  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.322785  742422 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0226 12:25:22.322861  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0226 12:25:22.394406  742422 cri.go:89] found id: "fa9de77e9f95fbe9d4edb231f338c859854cbad6bb067bbc9e4878c99bf3dd96"
	I0226 12:25:22.394477  742422 cri.go:89] found id: "e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac"
	I0226 12:25:22.394499  742422 cri.go:89] found id: ""
	I0226 12:25:22.394525  742422 logs.go:276] 2 containers: [fa9de77e9f95fbe9d4edb231f338c859854cbad6bb067bbc9e4878c99bf3dd96 e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac]
	I0226 12:25:22.394615  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.414049  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.435232  742422 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0226 12:25:22.435312  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0226 12:25:22.520247  742422 cri.go:89] found id: "1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c"
	I0226 12:25:22.520317  742422 cri.go:89] found id: ""
	I0226 12:25:22.520340  742422 logs.go:276] 1 containers: [1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c]
	I0226 12:25:22.520429  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.531784  742422 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0226 12:25:22.531901  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0226 12:25:22.598302  742422 cri.go:89] found id: "48e1c9fe1bef8a7a1668683a53f33ed6da71035b818e348421f28421d0c375a8"
	I0226 12:25:22.598375  742422 cri.go:89] found id: "c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e"
	I0226 12:25:22.598395  742422 cri.go:89] found id: ""
	I0226 12:25:22.598419  742422 logs.go:276] 2 containers: [48e1c9fe1bef8a7a1668683a53f33ed6da71035b818e348421f28421d0c375a8 c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e]
	I0226 12:25:22.598536  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.602463  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.605792  742422 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0226 12:25:22.605902  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0226 12:25:22.666877  742422 cri.go:89] found id: "8e4d16140ea0b33e2b235d2b4233d1de3c2760692719185ea8efc8c25e287bfc"
	I0226 12:25:22.666950  742422 cri.go:89] found id: "479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7"
	I0226 12:25:22.666968  742422 cri.go:89] found id: ""
	I0226 12:25:22.666990  742422 logs.go:276] 2 containers: [8e4d16140ea0b33e2b235d2b4233d1de3c2760692719185ea8efc8c25e287bfc 479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7]
	I0226 12:25:22.667072  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.679472  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.687441  742422 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0226 12:25:22.687556  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0226 12:25:22.751396  742422 cri.go:89] found id: "75edd03cac47e5d5d8d371f1df9d4621bfa5f06cf53611bf6dc131eb31f13816"
	I0226 12:25:22.751466  742422 cri.go:89] found id: "912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed"
	I0226 12:25:22.751485  742422 cri.go:89] found id: ""
	I0226 12:25:22.751511  742422 logs.go:276] 2 containers: [75edd03cac47e5d5d8d371f1df9d4621bfa5f06cf53611bf6dc131eb31f13816 912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed]
	I0226 12:25:22.751631  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.755363  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.758995  742422 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0226 12:25:22.759118  742422 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0226 12:25:22.807789  742422 cri.go:89] found id: "37b20f6e4c3363990b1728031036b5bb9b7d7982cc23f183eb0e6c08f2f80e9e"
	I0226 12:25:22.807849  742422 cri.go:89] found id: "b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1"
	I0226 12:25:22.807872  742422 cri.go:89] found id: ""
	I0226 12:25:22.807895  742422 logs.go:276] 2 containers: [37b20f6e4c3363990b1728031036b5bb9b7d7982cc23f183eb0e6c08f2f80e9e b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1]
	I0226 12:25:22.807980  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.811733  742422 ssh_runner.go:195] Run: which crictl
	I0226 12:25:22.815278  742422 logs.go:123] Gathering logs for kubelet ...
	I0226 12:25:22.815340  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0226 12:25:22.979815  742422 logs.go:123] Gathering logs for describe nodes ...
	I0226 12:25:22.979856  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
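
	While the apiserver stays unhealthy, the 742422 process falls back to gathering diagnostics: it lists each control-plane container with crictl, then tails that container's logs. A hedged Go sketch of the same pattern (shelling out to the crictl commands shown above; the helper names are illustrative and crictl must be on the path) is:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all CRI containers (running or exited) matching name.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// tailLogs returns the last n log lines of a container.
	func tailLogs(id string, n int) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
			ids, err := containerIDs(component)
			if err != nil {
				fmt.Println("listing", component, "failed:", err)
				continue
			}
			for _, id := range ids {
				logs, _ := tailLogs(id, 400)
				fmt.Printf("=== %s (%s) ===\n%s\n", component, id, logs)
			}
		}
	}
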
	I0226 12:25:24.499507  747648 provision.go:172] copyRemoteCerts
	I0226 12:25:24.499620  747648 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0226 12:25:24.499679  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:24.520846  747648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37001 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa Username:docker}
	I0226 12:25:24.630710  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0226 12:25:24.630770  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0226 12:25:24.680317  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0226 12:25:24.680425  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0226 12:25:24.718780  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0226 12:25:24.718840  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0226 12:25:24.761641  747648 provision.go:86] duration metric: configureAuth took 1.180567857s
	I0226 12:25:24.761664  747648 ubuntu.go:193] setting minikube options for container-runtime
	I0226 12:25:24.761842  747648 config.go:182] Loaded profile config "force-systemd-flag-700637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 12:25:24.761955  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:24.799612  747648 main.go:141] libmachine: Using SSH client type: native
	I0226 12:25:24.799855  747648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 37001 <nil> <nil>}
	I0226 12:25:24.799870  747648 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0226 12:25:25.119153  747648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0226 12:25:25.119188  747648 machine.go:91] provisioned docker machine in 4.966675867s
	I0226 12:25:25.119200  747648 client.go:171] LocalClient.Create took 11.219164561s
	I0226 12:25:25.119222  747648 start.go:167] duration metric: libmachine.API.Create for "force-systemd-flag-700637" took 11.219231702s
	I0226 12:25:25.119236  747648 start.go:300] post-start starting for "force-systemd-flag-700637" (driver="docker")
	I0226 12:25:25.119248  747648 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0226 12:25:25.119352  747648 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0226 12:25:25.119409  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:25.144850  747648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37001 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa Username:docker}
	I0226 12:25:25.263461  747648 ssh_runner.go:195] Run: cat /etc/os-release
	I0226 12:25:25.273374  747648 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0226 12:25:25.273415  747648 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0226 12:25:25.273430  747648 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0226 12:25:25.273438  747648 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0226 12:25:25.273448  747648 filesync.go:126] Scanning /home/jenkins/minikube-integration/18222-608626/.minikube/addons for local assets ...
	I0226 12:25:25.273504  747648 filesync.go:126] Scanning /home/jenkins/minikube-integration/18222-608626/.minikube/files for local assets ...
	I0226 12:25:25.273596  747648 filesync.go:149] local asset: /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem -> 6139882.pem in /etc/ssl/certs
	I0226 12:25:25.273603  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem -> /etc/ssl/certs/6139882.pem
	I0226 12:25:25.273721  747648 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0226 12:25:25.288351  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem --> /etc/ssl/certs/6139882.pem (1708 bytes)
	I0226 12:25:25.319208  747648 start.go:303] post-start completed in 199.955865ms
	I0226 12:25:25.319686  747648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-700637
	I0226 12:25:25.346305  747648 profile.go:148] Saving config to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/config.json ...
	I0226 12:25:25.346609  747648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 12:25:25.346652  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:25.384842  747648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37001 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa Username:docker}
	I0226 12:25:25.493092  747648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0226 12:25:25.498669  747648 start.go:128] duration metric: createHost completed in 11.601142841s
	I0226 12:25:25.498697  747648 start.go:83] releasing machines lock for "force-systemd-flag-700637", held for 11.601303263s
	I0226 12:25:25.498768  747648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-700637
	I0226 12:25:25.516322  747648 ssh_runner.go:195] Run: cat /version.json
	I0226 12:25:25.516389  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:25.516666  747648 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0226 12:25:25.516741  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:25.548044  747648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37001 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa Username:docker}
	I0226 12:25:25.556497  747648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37001 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa Username:docker}
	I0226 12:25:25.664716  747648 ssh_runner.go:195] Run: systemctl --version
	I0226 12:25:25.801690  747648 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0226 12:25:25.962896  747648 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0226 12:25:25.967130  747648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0226 12:25:25.995272  747648 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0226 12:25:25.995346  747648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0226 12:25:26.066195  747648 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0226 12:25:26.066265  747648 start.go:475] detecting cgroup driver to use...
	I0226 12:25:26.066292  747648 start.go:479] using "systemd" cgroup driver as enforced via flags
	I0226 12:25:26.066391  747648 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0226 12:25:26.088974  747648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0226 12:25:26.107296  747648 docker.go:217] disabling cri-docker service (if available) ...
	I0226 12:25:26.107443  747648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0226 12:25:26.123671  747648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0226 12:25:26.149748  747648 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0226 12:25:26.274998  747648 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0226 12:25:26.438325  747648 docker.go:233] disabling docker service ...
	I0226 12:25:26.438465  747648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0226 12:25:26.481521  747648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0226 12:25:26.497162  747648 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0226 12:25:26.659370  747648 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0226 12:25:26.836392  747648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0226 12:25:26.849654  747648 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0226 12:25:26.874642  747648 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0226 12:25:26.874714  747648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 12:25:26.891749  747648 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0226 12:25:26.891819  747648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 12:25:26.907488  747648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 12:25:26.923651  747648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0226 12:25:26.936553  747648 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0226 12:25:26.952599  747648 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0226 12:25:26.967146  747648 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0226 12:25:26.982371  747648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0226 12:25:27.178468  747648 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0226 12:25:27.379142  747648 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0226 12:25:27.379215  747648 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0226 12:25:27.391916  747648 start.go:543] Will wait 60s for crictl version
	I0226 12:25:27.392001  747648 ssh_runner.go:195] Run: which crictl
	I0226 12:25:27.406715  747648 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0226 12:25:27.487004  747648 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0226 12:25:27.487087  747648 ssh_runner.go:195] Run: crio --version
	I0226 12:25:27.558894  747648 ssh_runner.go:195] Run: crio --version
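
	Because this test forces the systemd cgroup driver, provisioning rewrites /etc/crio/crio.conf.d/02-crio.conf (cgroup_manager = "systemd", conmon_cgroup = "pod") with sed and restarts crio before the version checks above. The sketch below performs equivalent edits in pure Go rather than sed; the path and values are copied from the log, the wrapper name is an assumption, and a crio restart is still needed afterwards:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setCgroupManager rewrites a crio drop-in so crio uses the given cgroup
	// driver, mirroring the sed edits shown in the log.
	func setCgroupManager(path, driver string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		conf := string(data)
		// Point cgroup_manager at the requested driver.
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", driver))
		// Drop any existing conmon_cgroup line, then pin conmon to the pod cgroup.
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
		return os.WriteFile(path, []byte(conf), 0o644)
	}

	func main() {
		if err := setCgroupManager("/etc/crio/crio.conf.d/02-crio.conf", "systemd"); err != nil {
			fmt.Println(err)
		}
		// `sudo systemctl restart crio` must follow for the change to take effect.
	}
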
	I0226 12:25:27.628765  747648 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0226 12:25:27.630898  747648 cli_runner.go:164] Run: docker network inspect force-systemd-flag-700637 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0226 12:25:27.651201  747648 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0226 12:25:27.655343  747648 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 12:25:27.669800  747648 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0226 12:25:27.669864  747648 ssh_runner.go:195] Run: sudo crictl images --output json
	I0226 12:25:27.795488  747648 crio.go:496] all images are preloaded for cri-o runtime.
	I0226 12:25:27.795515  747648 crio.go:415] Images already preloaded, skipping extraction
	I0226 12:25:27.795580  747648 ssh_runner.go:195] Run: sudo crictl images --output json
	I0226 12:25:27.861078  747648 crio.go:496] all images are preloaded for cri-o runtime.
	I0226 12:25:27.861104  747648 cache_images.go:84] Images are preloaded, skipping loading
	I0226 12:25:27.861181  747648 ssh_runner.go:195] Run: crio config
	I0226 12:25:27.956004  747648 cni.go:84] Creating CNI manager for ""
	I0226 12:25:27.956030  747648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 12:25:27.956078  747648 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0226 12:25:27.956105  747648 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-700637 NodeName:force-systemd-flag-700637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0226 12:25:27.956315  747648 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-700637"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0226 12:25:27.956396  747648 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=force-systemd-flag-700637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-700637 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0226 12:25:27.956490  747648 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0226 12:25:27.972074  747648 binaries.go:44] Found k8s binaries, skipping transfer
	I0226 12:25:27.972315  747648 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0226 12:25:27.982501  747648 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I0226 12:25:28.010281  747648 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0226 12:25:28.036200  747648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0226 12:25:28.059623  747648 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0226 12:25:28.064012  747648 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0226 12:25:28.077174  747648 certs.go:56] Setting up /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637 for IP: 192.168.67.2
	I0226 12:25:28.077254  747648 certs.go:190] acquiring lock for shared ca certs: {Name:mkc71f6ba94614715b3b8dc8b06b5f59e5f1adfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:28.077456  747648 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.key
	I0226 12:25:28.077537  747648 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.key
	I0226 12:25:28.077611  747648 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.key
	I0226 12:25:28.077647  747648 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.crt with IP's: []
	I0226 12:25:28.661915  747648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.crt ...
	I0226 12:25:28.661950  747648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.crt: {Name:mkd884a24f94a96605c816a1dfcbdd6ab967557d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:28.662629  747648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.key ...
	I0226 12:25:28.662650  747648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.key: {Name:mk10e2e8738cc939520b05e12b7688e4884c6729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:28.663251  747648 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.key.c7fa3a9e
	I0226 12:25:28.663276  747648 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0226 12:25:26.822451  742422 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.842570745s)
	I0226 12:25:26.826482  742422 logs.go:123] Gathering logs for kube-controller-manager [912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed] ...
	I0226 12:25:26.826527  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed"
	I0226 12:25:26.921105  742422 logs.go:123] Gathering logs for CRI-O ...
	I0226 12:25:26.921131  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0226 12:25:27.067849  742422 logs.go:123] Gathering logs for etcd [fa9de77e9f95fbe9d4edb231f338c859854cbad6bb067bbc9e4878c99bf3dd96] ...
	I0226 12:25:27.067928  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa9de77e9f95fbe9d4edb231f338c859854cbad6bb067bbc9e4878c99bf3dd96"
	I0226 12:25:27.154105  742422 logs.go:123] Gathering logs for etcd [e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac] ...
	I0226 12:25:27.154231  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac"
	I0226 12:25:27.230632  742422 logs.go:123] Gathering logs for kube-scheduler [c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e] ...
	I0226 12:25:27.230717  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e"
	I0226 12:25:27.307524  742422 logs.go:123] Gathering logs for kindnet [37b20f6e4c3363990b1728031036b5bb9b7d7982cc23f183eb0e6c08f2f80e9e] ...
	I0226 12:25:27.307560  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37b20f6e4c3363990b1728031036b5bb9b7d7982cc23f183eb0e6c08f2f80e9e"
	I0226 12:25:27.430207  742422 logs.go:123] Gathering logs for kindnet [b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1] ...
	I0226 12:25:27.430279  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1"
	I0226 12:25:27.575925  742422 logs.go:123] Gathering logs for kube-controller-manager [75edd03cac47e5d5d8d371f1df9d4621bfa5f06cf53611bf6dc131eb31f13816] ...
	I0226 12:25:27.575950  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75edd03cac47e5d5d8d371f1df9d4621bfa5f06cf53611bf6dc131eb31f13816"
	I0226 12:25:27.680883  742422 logs.go:123] Gathering logs for dmesg ...
	I0226 12:25:27.680908  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0226 12:25:27.715237  742422 logs.go:123] Gathering logs for coredns [1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c] ...
	I0226 12:25:27.715311  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c"
	I0226 12:25:27.839857  742422 logs.go:123] Gathering logs for kube-scheduler [48e1c9fe1bef8a7a1668683a53f33ed6da71035b818e348421f28421d0c375a8] ...
	I0226 12:25:27.839881  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e1c9fe1bef8a7a1668683a53f33ed6da71035b818e348421f28421d0c375a8"
	I0226 12:25:27.907549  742422 logs.go:123] Gathering logs for kube-proxy [8e4d16140ea0b33e2b235d2b4233d1de3c2760692719185ea8efc8c25e287bfc] ...
	I0226 12:25:27.907623  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e4d16140ea0b33e2b235d2b4233d1de3c2760692719185ea8efc8c25e287bfc"
	I0226 12:25:27.967584  742422 logs.go:123] Gathering logs for kube-proxy [479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7] ...
	I0226 12:25:27.967690  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7"
	I0226 12:25:28.032887  742422 logs.go:123] Gathering logs for kube-apiserver [ee1c307b25cee84a6ed691ac79bdbf66fe243ae67eeed806ab2ca9188bcc6013] ...
	I0226 12:25:28.032915  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee1c307b25cee84a6ed691ac79bdbf66fe243ae67eeed806ab2ca9188bcc6013"
	I0226 12:25:28.162058  742422 logs.go:123] Gathering logs for container status ...
	I0226 12:25:28.162138  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
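For reference, the container-status snapshot gathered above can be reproduced by hand against this profile; a minimal sketch using the suite's own binary and the crictl call shown in the log (profile name pause-534129 taken from the surrounding lines):

  out/minikube-linux-arm64 -p pause-534129 ssh "sudo crictl ps -a"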
	I0226 12:25:30.739455  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:30.753922  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0226 12:25:30.789204  742422 api_server.go:141] control plane version: v1.28.4
	I0226 12:25:30.789239  742422 api_server.go:131] duration metric: took 1m8.537236202s to wait for apiserver health ...
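The healthz wait above polls the apiserver endpoint directly; a minimal manual probe of the same URL is sketched below (unauthenticated access to /healthz depends on the cluster's anonymous-auth and default RBAC settings, so treat this as illustrative):

  # Probe the apiserver health endpoint; -k skips TLS verification for this check
  curl -k https://192.168.76.2:8443/healthz
  # a healthy apiserver answers with: ok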
	I0226 12:25:30.789250  742422 cni.go:84] Creating CNI manager for ""
	I0226 12:25:30.789257  742422 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 12:25:30.792836  742422 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0226 12:25:29.155339  747648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.crt.c7fa3a9e ...
	I0226 12:25:29.155369  747648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.crt.c7fa3a9e: {Name:mk2ca6642f6dba239225e03b5c7d36322df38943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:29.156108  747648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.key.c7fa3a9e ...
	I0226 12:25:29.156127  747648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.key.c7fa3a9e: {Name:mk28285b16100246743138c857290ff1e26bc647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:29.156222  747648 certs.go:337] copying /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.crt
	I0226 12:25:29.156313  747648 certs.go:341] copying /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.key
	I0226 12:25:29.156375  747648 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.key
	I0226 12:25:29.156392  747648 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.crt with IP's: []
	I0226 12:25:30.293819  747648 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.crt ...
	I0226 12:25:30.293855  747648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.crt: {Name:mkcca1228e521afb7e235ea80ca2a660ae184ac7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:30.294624  747648 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.key ...
	I0226 12:25:30.294649  747648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.key: {Name:mk28ba555e8a73381a06f10f18b14a775bcff273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:30.294750  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0226 12:25:30.294771  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0226 12:25:30.294784  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0226 12:25:30.294800  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0226 12:25:30.294812  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0226 12:25:30.294828  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0226 12:25:30.294840  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0226 12:25:30.294857  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0226 12:25:30.294922  747648 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/613988.pem (1338 bytes)
	W0226 12:25:30.294969  747648 certs.go:433] ignoring /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/613988_empty.pem, impossibly tiny 0 bytes
	I0226 12:25:30.294984  747648 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca-key.pem (1675 bytes)
	I0226 12:25:30.295012  747648 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/ca.pem (1082 bytes)
	I0226 12:25:30.295041  747648 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/cert.pem (1123 bytes)
	I0226 12:25:30.295072  747648 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/home/jenkins/minikube-integration/18222-608626/.minikube/certs/key.pem (1679 bytes)
	I0226 12:25:30.295124  747648 certs.go:437] found cert: /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem (1708 bytes)
	I0226 12:25:30.295168  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem -> /usr/share/ca-certificates/6139882.pem
	I0226 12:25:30.295217  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0226 12:25:30.295229  747648 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18222-608626/.minikube/certs/613988.pem -> /usr/share/ca-certificates/613988.pem
	I0226 12:25:30.295793  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0226 12:25:30.320460  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0226 12:25:30.345281  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0226 12:25:30.370546  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0226 12:25:30.396044  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0226 12:25:30.421867  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0226 12:25:30.447423  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0226 12:25:30.473743  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0226 12:25:30.500653  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/ssl/certs/6139882.pem --> /usr/share/ca-certificates/6139882.pem (1708 bytes)
	I0226 12:25:30.527091  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0226 12:25:30.553088  747648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18222-608626/.minikube/certs/613988.pem --> /usr/share/ca-certificates/613988.pem (1338 bytes)
	I0226 12:25:30.578157  747648 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0226 12:25:30.596501  747648 ssh_runner.go:195] Run: openssl version
	I0226 12:25:30.602628  747648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6139882.pem && ln -fs /usr/share/ca-certificates/6139882.pem /etc/ssl/certs/6139882.pem"
	I0226 12:25:30.612292  747648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6139882.pem
	I0226 12:25:30.615855  747648 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 11:52 /usr/share/ca-certificates/6139882.pem
	I0226 12:25:30.615918  747648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6139882.pem
	I0226 12:25:30.623066  747648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6139882.pem /etc/ssl/certs/3ec20f2e.0"
	I0226 12:25:30.632717  747648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0226 12:25:30.642105  747648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0226 12:25:30.645920  747648 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 11:45 /usr/share/ca-certificates/minikubeCA.pem
	I0226 12:25:30.646001  747648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0226 12:25:30.652927  747648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0226 12:25:30.662554  747648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/613988.pem && ln -fs /usr/share/ca-certificates/613988.pem /etc/ssl/certs/613988.pem"
	I0226 12:25:30.672361  747648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/613988.pem
	I0226 12:25:30.676131  747648 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 11:52 /usr/share/ca-certificates/613988.pem
	I0226 12:25:30.676241  747648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/613988.pem
	I0226 12:25:30.684547  747648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/613988.pem /etc/ssl/certs/51391683.0"
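The ls / openssl / ln sequence above builds the standard OpenSSL hashed-symlink layout in /etc/ssl/certs; a condensed sketch of the same idea for one of the certificates named in the log:

  # Compute the subject hash OpenSSL uses when looking up CA certificates
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  # Expose the certificate under its hash name (.0 = first certificate with this hash)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"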
	I0226 12:25:30.695823  747648 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0226 12:25:30.699224  747648 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0226 12:25:30.699276  747648 kubeadm.go:404] StartCluster: {Name:force-systemd-flag-700637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-700637 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 12:25:30.699372  747648 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0226 12:25:30.699436  747648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0226 12:25:30.737898  747648 cri.go:89] found id: ""
	I0226 12:25:30.738027  747648 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0226 12:25:30.748391  747648 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0226 12:25:30.759666  747648 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0226 12:25:30.759766  747648 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0226 12:25:30.773548  747648 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0226 12:25:30.773605  747648 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0226 12:25:30.848849  747648 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0226 12:25:30.848951  747648 kubeadm.go:322] [preflight] Running pre-flight checks
	I0226 12:25:30.910316  747648 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0226 12:25:30.910419  747648 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0226 12:25:30.910477  747648 kubeadm.go:322] OS: Linux
	I0226 12:25:30.910545  747648 kubeadm.go:322] CGROUPS_CPU: enabled
	I0226 12:25:30.910610  747648 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0226 12:25:30.910677  747648 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0226 12:25:30.910746  747648 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0226 12:25:30.910816  747648 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0226 12:25:30.910884  747648 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0226 12:25:30.910958  747648 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0226 12:25:30.911024  747648 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0226 12:25:30.911089  747648 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0226 12:25:31.019452  747648 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0226 12:25:31.019623  747648 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0226 12:25:31.019747  747648 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0226 12:25:31.425068  747648 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0226 12:25:31.429938  747648 out.go:204]   - Generating certificates and keys ...
	I0226 12:25:31.430086  747648 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0226 12:25:31.430171  747648 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0226 12:25:31.788793  747648 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0226 12:25:32.561018  747648 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0226 12:25:32.854795  747648 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0226 12:25:33.281649  747648 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0226 12:25:30.794744  742422 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0226 12:25:30.799542  742422 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0226 12:25:30.799567  742422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0226 12:25:30.819612  742422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0226 12:25:32.204645  742422 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.384993801s)
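The CNI step above is a plain kubectl apply of the manifest minikube wrote onto the node; a minimal sketch run from inside the node, followed by a check for the conflist that the CRI-O journal at the end of this section reports (paths taken from the log):

  # Apply the kindnet CNI manifest against the in-cluster kubeconfig
  sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply \
    --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
  # kindnet then renders its CNI config for CRI-O to pick up
  ls /etc/cni/net.d/10-kindnet.conflist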
	I0226 12:25:32.204697  742422 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 12:25:32.214059  742422 system_pods.go:59] 7 kube-system pods found
	I0226 12:25:32.214157  742422 system_pods.go:61] "coredns-5dd5756b68-jphcc" [1389dc8e-2557-486e-be8a-598958aa8372] Running
	I0226 12:25:32.214185  742422 system_pods.go:61] "etcd-pause-534129" [458dda30-3b78-4276-9529-49adfbcadc22] Running
	I0226 12:25:32.214209  742422 system_pods.go:61] "kindnet-zgq8r" [10aa1f57-33c0-4f80-b9dc-ac083e1b47c3] Running
	I0226 12:25:32.214242  742422 system_pods.go:61] "kube-apiserver-pause-534129" [c2fabc8f-4ef0-4904-88e2-61c5677dc00e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0226 12:25:32.214263  742422 system_pods.go:61] "kube-controller-manager-pause-534129" [f2e96412-fff9-40c8-bdaf-3fb61ea1f0b9] Running
	I0226 12:25:32.214290  742422 system_pods.go:61] "kube-proxy-6stnr" [ddf2146c-15dd-4280-b05f-6476a69b62a2] Running
	I0226 12:25:32.214315  742422 system_pods.go:61] "kube-scheduler-pause-534129" [d2d23bbe-5a4c-4613-9d53-baa17af001cc] Running
	I0226 12:25:32.214340  742422 system_pods.go:74] duration metric: took 9.635341ms to wait for pod list to return data ...
	I0226 12:25:32.214362  742422 node_conditions.go:102] verifying NodePressure condition ...
	I0226 12:25:32.219535  742422 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0226 12:25:32.219607  742422 node_conditions.go:123] node cpu capacity is 2
	I0226 12:25:32.219640  742422 node_conditions.go:105] duration metric: took 5.25819ms to run NodePressure ...
	I0226 12:25:32.219683  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0226 12:25:32.451457  742422 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0226 12:25:32.461024  742422 kubeadm.go:787] kubelet initialised
	I0226 12:25:32.461100  742422 kubeadm.go:788] duration metric: took 9.580171ms waiting for restarted kubelet to initialise ...
	I0226 12:25:32.461124  742422 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 12:25:32.468963  742422 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:32.479836  742422 pod_ready.go:92] pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:32.479858  742422 pod_ready.go:81] duration metric: took 10.825131ms waiting for pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:32.479873  742422 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:32.491981  742422 pod_ready.go:92] pod "etcd-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:32.492048  742422 pod_ready.go:81] duration metric: took 12.166482ms waiting for pod "etcd-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:32.492079  742422 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:34.500037  742422 pod_ready.go:102] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"False"
	I0226 12:25:33.849071  747648 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0226 12:25:33.849411  747648 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-700637 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0226 12:25:34.216088  747648 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0226 12:25:34.216439  747648 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-700637 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0226 12:25:34.624708  747648 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0226 12:25:34.971969  747648 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0226 12:25:35.407690  747648 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0226 12:25:35.408049  747648 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0226 12:25:35.854753  747648 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0226 12:25:36.237418  747648 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0226 12:25:36.540707  747648 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0226 12:25:36.988495  747648 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0226 12:25:36.989326  747648 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0226 12:25:36.992008  747648 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0226 12:25:36.994700  747648 out.go:204]   - Booting up control plane ...
	I0226 12:25:36.994805  747648 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0226 12:25:36.994880  747648 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0226 12:25:36.994944  747648 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0226 12:25:37.008734  747648 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0226 12:25:37.011182  747648 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0226 12:25:37.011461  747648 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0226 12:25:37.120388  747648 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0226 12:25:36.501071  742422 pod_ready.go:102] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"False"
	I0226 12:25:39.004252  742422 pod_ready.go:102] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"False"
	I0226 12:25:41.517429  742422 pod_ready.go:102] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"False"
	I0226 12:25:42.000467  742422 pod_ready.go:92] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.000554  742422 pod_ready.go:81] duration metric: took 9.508452394s waiting for pod "kube-apiserver-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.000581  742422 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.009072  742422 pod_ready.go:92] pod "kube-controller-manager-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.009098  742422 pod_ready.go:81] duration metric: took 8.495092ms waiting for pod "kube-controller-manager-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.009110  742422 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6stnr" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.018114  742422 pod_ready.go:92] pod "kube-proxy-6stnr" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.018195  742422 pod_ready.go:81] duration metric: took 9.077393ms waiting for pod "kube-proxy-6stnr" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.018223  742422 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.027431  742422 pod_ready.go:92] pod "kube-scheduler-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.027499  742422 pod_ready.go:81] duration metric: took 9.255833ms waiting for pod "kube-scheduler-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.027524  742422 pod_ready.go:38] duration metric: took 9.566372506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
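The per-pod readiness waits above can be approximated from the host with kubectl wait; a sketch using the component labels listed in the log (the context name pause-534129 is assumed to match the profile):

  # Wait for system-critical pods to report Ready, mirroring the pod_ready checks above
  kubectl --context pause-534129 -n kube-system wait pod \
    -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
  kubectl --context pause-534129 -n kube-system wait pod \
    -l component=kube-apiserver --for=condition=Ready --timeout=4m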
	I0226 12:25:42.027577  742422 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0226 12:25:42.039342  742422 ops.go:34] apiserver oom_adj: -16
	I0226 12:25:42.039413  742422 kubeadm.go:640] restartCluster took 1m41.422006574s
	I0226 12:25:42.039437  742422 kubeadm.go:406] StartCluster complete in 1m41.495813863s
	I0226 12:25:42.039485  742422 settings.go:142] acquiring lock: {Name:mk1588246e1eeb31f86f63cf3c470d51f6fe64da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:42.039581  742422 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 12:25:42.040374  742422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/kubeconfig: {Name:mk0efe1f972316757632066327a27c71356b5734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:42.040706  742422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0226 12:25:42.041053  742422 config.go:182] Loaded profile config "pause-534129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 12:25:42.041090  742422 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0226 12:25:42.043562  742422 out.go:177] * Enabled addons: 
	I0226 12:25:42.042134  742422 kapi.go:59] client config for pause-534129: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/client.crt", KeyFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/pause-534129/client.key", CAFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1702d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0226 12:25:42.045975  742422 addons.go:505] enable addons completed in 4.881948ms: enabled=[]
	I0226 12:25:42.050182  742422 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-534129" context rescaled to 1 replicas
	I0226 12:25:42.050264  742422 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0226 12:25:42.053963  742422 out.go:177] * Verifying Kubernetes components...
	I0226 12:25:42.056049  742422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 12:25:42.259970  742422 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0226 12:25:42.260021  742422 node_ready.go:35] waiting up to 6m0s for node "pause-534129" to be "Ready" ...
	I0226 12:25:42.265667  742422 node_ready.go:49] node "pause-534129" has status "Ready":"True"
	I0226 12:25:42.265689  742422 node_ready.go:38] duration metric: took 5.653477ms waiting for node "pause-534129" to be "Ready" ...
	I0226 12:25:42.265700  742422 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 12:25:42.277061  742422 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.396840  742422 pod_ready.go:92] pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.396867  742422 pod_ready.go:81] duration metric: took 119.730999ms waiting for pod "coredns-5dd5756b68-jphcc" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.396880  742422 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.797380  742422 pod_ready.go:92] pod "etcd-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:42.797409  742422 pod_ready.go:81] duration metric: took 400.521397ms waiting for pod "etcd-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:42.797424  742422 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.196978  742422 pod_ready.go:92] pod "kube-apiserver-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:43.197007  742422 pod_ready.go:81] duration metric: took 399.574923ms waiting for pod "kube-apiserver-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.197026  742422 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.597217  742422 pod_ready.go:92] pod "kube-controller-manager-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:43.597255  742422 pod_ready.go:81] duration metric: took 400.210974ms waiting for pod "kube-controller-manager-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.597267  742422 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6stnr" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.996858  742422 pod_ready.go:92] pod "kube-proxy-6stnr" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:43.996880  742422 pod_ready.go:81] duration metric: took 399.605052ms waiting for pod "kube-proxy-6stnr" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:43.996892  742422 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:44.396880  742422 pod_ready.go:92] pod "kube-scheduler-pause-534129" in "kube-system" namespace has status "Ready":"True"
	I0226 12:25:44.396954  742422 pod_ready.go:81] duration metric: took 400.052014ms waiting for pod "kube-scheduler-pause-534129" in "kube-system" namespace to be "Ready" ...
	I0226 12:25:44.396981  742422 pod_ready.go:38] duration metric: took 2.131269367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0226 12:25:44.397025  742422 api_server.go:52] waiting for apiserver process to appear ...
	I0226 12:25:44.397109  742422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 12:25:44.409474  742422 api_server.go:72] duration metric: took 2.359155863s to wait for apiserver process to appear ...
	I0226 12:25:44.409540  742422 api_server.go:88] waiting for apiserver healthz status ...
	I0226 12:25:44.409576  742422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0226 12:25:44.418165  742422 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0226 12:25:44.420995  742422 api_server.go:141] control plane version: v1.28.4
	I0226 12:25:44.421064  742422 api_server.go:131] duration metric: took 11.501042ms to wait for apiserver health ...
	I0226 12:25:44.421088  742422 system_pods.go:43] waiting for kube-system pods to appear ...
	I0226 12:25:44.599968  742422 system_pods.go:59] 7 kube-system pods found
	I0226 12:25:44.600002  742422 system_pods.go:61] "coredns-5dd5756b68-jphcc" [1389dc8e-2557-486e-be8a-598958aa8372] Running
	I0226 12:25:44.600008  742422 system_pods.go:61] "etcd-pause-534129" [458dda30-3b78-4276-9529-49adfbcadc22] Running
	I0226 12:25:44.600012  742422 system_pods.go:61] "kindnet-zgq8r" [10aa1f57-33c0-4f80-b9dc-ac083e1b47c3] Running
	I0226 12:25:44.600016  742422 system_pods.go:61] "kube-apiserver-pause-534129" [c2fabc8f-4ef0-4904-88e2-61c5677dc00e] Running
	I0226 12:25:44.600053  742422 system_pods.go:61] "kube-controller-manager-pause-534129" [f2e96412-fff9-40c8-bdaf-3fb61ea1f0b9] Running
	I0226 12:25:44.600064  742422 system_pods.go:61] "kube-proxy-6stnr" [ddf2146c-15dd-4280-b05f-6476a69b62a2] Running
	I0226 12:25:44.600075  742422 system_pods.go:61] "kube-scheduler-pause-534129" [d2d23bbe-5a4c-4613-9d53-baa17af001cc] Running
	I0226 12:25:44.600081  742422 system_pods.go:74] duration metric: took 178.972806ms to wait for pod list to return data ...
	I0226 12:25:44.600093  742422 default_sa.go:34] waiting for default service account to be created ...
	I0226 12:25:44.796417  742422 default_sa.go:45] found service account: "default"
	I0226 12:25:44.796446  742422 default_sa.go:55] duration metric: took 196.346157ms for default service account to be created ...
	I0226 12:25:44.796459  742422 system_pods.go:116] waiting for k8s-apps to be running ...
	I0226 12:25:45.000222  742422 system_pods.go:86] 7 kube-system pods found
	I0226 12:25:45.000264  742422 system_pods.go:89] "coredns-5dd5756b68-jphcc" [1389dc8e-2557-486e-be8a-598958aa8372] Running
	I0226 12:25:45.000272  742422 system_pods.go:89] "etcd-pause-534129" [458dda30-3b78-4276-9529-49adfbcadc22] Running
	I0226 12:25:45.000277  742422 system_pods.go:89] "kindnet-zgq8r" [10aa1f57-33c0-4f80-b9dc-ac083e1b47c3] Running
	I0226 12:25:45.000281  742422 system_pods.go:89] "kube-apiserver-pause-534129" [c2fabc8f-4ef0-4904-88e2-61c5677dc00e] Running
	I0226 12:25:45.000285  742422 system_pods.go:89] "kube-controller-manager-pause-534129" [f2e96412-fff9-40c8-bdaf-3fb61ea1f0b9] Running
	I0226 12:25:45.000289  742422 system_pods.go:89] "kube-proxy-6stnr" [ddf2146c-15dd-4280-b05f-6476a69b62a2] Running
	I0226 12:25:45.000293  742422 system_pods.go:89] "kube-scheduler-pause-534129" [d2d23bbe-5a4c-4613-9d53-baa17af001cc] Running
	I0226 12:25:45.000301  742422 system_pods.go:126] duration metric: took 203.836643ms to wait for k8s-apps to be running ...
	I0226 12:25:45.000312  742422 system_svc.go:44] waiting for kubelet service to be running ....
	I0226 12:25:45.000394  742422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 12:25:45.039346  742422 system_svc.go:56] duration metric: took 39.020604ms WaitForService to wait for kubelet.
	I0226 12:25:45.039386  742422 kubeadm.go:581] duration metric: took 2.989077825s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0226 12:25:45.039408  742422 node_conditions.go:102] verifying NodePressure condition ...
	I0226 12:25:45.197578  742422 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0226 12:25:45.197615  742422 node_conditions.go:123] node cpu capacity is 2
	I0226 12:25:45.197684  742422 node_conditions.go:105] duration metric: took 158.248997ms to run NodePressure ...
	I0226 12:25:45.197704  742422 start.go:228] waiting for startup goroutines ...
	I0226 12:25:45.197712  742422 start.go:233] waiting for cluster config update ...
	I0226 12:25:45.197726  742422 start.go:242] writing updated cluster config ...
	I0226 12:25:45.198124  742422 ssh_runner.go:195] Run: rm -f paused
	I0226 12:25:45.275137  742422 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0226 12:25:45.277518  742422 out.go:177] * Done! kubectl is now configured to use "pause-534129" cluster and "default" namespace by default
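At this point the second start of the pause profile is complete and the context is usable directly; a quick sanity check with the kubectl 1.29.2 client mentioned above (context name assumed to match the profile):

  kubectl --context pause-534129 get nodes
  kubectl --context pause-534129 -n kube-system get pods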
	I0226 12:25:45.130232  747648 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.008520 seconds
	I0226 12:25:45.130359  747648 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0226 12:25:45.150676  747648 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0226 12:25:45.695532  747648 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0226 12:25:45.695779  747648 kubeadm.go:322] [mark-control-plane] Marking the node force-systemd-flag-700637 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0226 12:25:46.208896  747648 kubeadm.go:322] [bootstrap-token] Using token: p3vd1w.jpwomrsq68u4xjkv
	I0226 12:25:46.212591  747648 out.go:204]   - Configuring RBAC rules ...
	I0226 12:25:46.212747  747648 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0226 12:25:46.220976  747648 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0226 12:25:46.239795  747648 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0226 12:25:46.243729  747648 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0226 12:25:46.249446  747648 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0226 12:25:46.257140  747648 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0226 12:25:46.275810  747648 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0226 12:25:46.841024  747648 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0226 12:25:47.091734  747648 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0226 12:25:47.093165  747648 kubeadm.go:322] 
	I0226 12:25:47.093242  747648 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0226 12:25:47.093253  747648 kubeadm.go:322] 
	I0226 12:25:47.093349  747648 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0226 12:25:47.093365  747648 kubeadm.go:322] 
	I0226 12:25:47.093391  747648 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0226 12:25:47.093449  747648 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0226 12:25:47.093498  747648 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0226 12:25:47.093503  747648 kubeadm.go:322] 
	I0226 12:25:47.093562  747648 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0226 12:25:47.093567  747648 kubeadm.go:322] 
	I0226 12:25:47.093616  747648 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0226 12:25:47.093621  747648 kubeadm.go:322] 
	I0226 12:25:47.093670  747648 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0226 12:25:47.093742  747648 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0226 12:25:47.093808  747648 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0226 12:25:47.093813  747648 kubeadm.go:322] 
	I0226 12:25:47.093896  747648 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0226 12:25:47.093986  747648 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0226 12:25:47.093992  747648 kubeadm.go:322] 
	I0226 12:25:47.094072  747648 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token p3vd1w.jpwomrsq68u4xjkv \
	I0226 12:25:47.094171  747648 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4951039124412052416f64387a7476aba3429f2071dfaa9a882b475b36ccdccb \
	I0226 12:25:47.094191  747648 kubeadm.go:322] 	--control-plane 
	I0226 12:25:47.094195  747648 kubeadm.go:322] 
	I0226 12:25:47.094276  747648 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0226 12:25:47.094281  747648 kubeadm.go:322] 
	I0226 12:25:47.094359  747648 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token p3vd1w.jpwomrsq68u4xjkv \
	I0226 12:25:47.094457  747648 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4951039124412052416f64387a7476aba3429f2071dfaa9a882b475b36ccdccb 
	I0226 12:25:47.099972  747648 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0226 12:25:47.100088  747648 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
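The Service-Kubelet preflight warning above names the standard remedy; on a host where the kubelet should run as a persistent unit, the fix is the one kubeadm suggests:

  # Enable the kubelet systemd unit so it starts on boot
  sudo systemctl enable kubelet.service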
	I0226 12:25:47.100108  747648 cni.go:84] Creating CNI manager for ""
	I0226 12:25:47.100115  747648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 12:25:47.102244  747648 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0226 12:25:47.104008  747648 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0226 12:25:47.116880  747648 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0226 12:25:47.116899  747648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0226 12:25:47.145190  747648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0226 12:25:48.513834  747648 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.36860799s)
	I0226 12:25:48.513878  747648 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0226 12:25:48.514015  747648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:25:48.514099  747648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4011915ad0e9b27ff42994854397cc2ed93516c6 minikube.k8s.io/name=force-systemd-flag-700637 minikube.k8s.io/updated_at=2024_02_26T12_25_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0226 12:25:48.741191  747648 ops.go:34] apiserver oom_adj: -16
	I0226 12:25:48.741304  747648 kubeadm.go:1088] duration metric: took 227.344241ms to wait for elevateKubeSystemPrivileges.
	I0226 12:25:48.741332  747648 kubeadm.go:406] StartCluster complete in 18.042060326s
	I0226 12:25:48.741357  747648 settings.go:142] acquiring lock: {Name:mk1588246e1eeb31f86f63cf3c470d51f6fe64da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:48.741431  747648 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 12:25:48.742679  747648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18222-608626/kubeconfig: {Name:mk0efe1f972316757632066327a27c71356b5734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0226 12:25:48.742937  747648 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0226 12:25:48.743426  747648 config.go:182] Loaded profile config "force-systemd-flag-700637": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 12:25:48.743655  747648 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0226 12:25:48.743745  747648 addons.go:69] Setting storage-provisioner=true in profile "force-systemd-flag-700637"
	I0226 12:25:48.743770  747648 addons.go:234] Setting addon storage-provisioner=true in "force-systemd-flag-700637"
	I0226 12:25:48.743827  747648 host.go:66] Checking if "force-systemd-flag-700637" exists ...
	I0226 12:25:48.744583  747648 cli_runner.go:164] Run: docker container inspect force-systemd-flag-700637 --format={{.State.Status}}
	I0226 12:25:48.752625  747648 kapi.go:59] client config for force-systemd-flag-700637: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.crt", KeyFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.key", CAFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1702d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0226 12:25:48.752782  747648 addons.go:69] Setting default-storageclass=true in profile "force-systemd-flag-700637"
	I0226 12:25:48.753097  747648 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-flag-700637"
	I0226 12:25:48.753417  747648 cli_runner.go:164] Run: docker container inspect force-systemd-flag-700637 --format={{.State.Status}}
	I0226 12:25:48.754548  747648 cert_rotation.go:137] Starting client certificate rotation controller
	I0226 12:25:48.830712  747648 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0226 12:25:48.832540  747648 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 12:25:48.832563  747648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0226 12:25:48.832631  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:48.833338  747648 kapi.go:59] client config for force-systemd-flag-700637: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.crt", KeyFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/profiles/force-systemd-flag-700637/client.key", CAFile:"/home/jenkins/minikube-integration/18222-608626/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1702d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0226 12:25:48.834572  747648 addons.go:234] Setting addon default-storageclass=true in "force-systemd-flag-700637"
	I0226 12:25:48.834616  747648 host.go:66] Checking if "force-systemd-flag-700637" exists ...
	I0226 12:25:48.835248  747648 cli_runner.go:164] Run: docker container inspect force-systemd-flag-700637 --format={{.State.Status}}
	I0226 12:25:48.904898  747648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37001 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa Username:docker}
	I0226 12:25:48.913217  747648 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0226 12:25:48.913239  747648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0226 12:25:48.913300  747648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700637
	I0226 12:25:48.938795  747648 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
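The sed pipeline above injects a hosts block (mapping 192.168.67.1 to host.minikube.internal) into the CoreDNS Corefile ahead of the forward directive and replaces the configmap in place; the result can be inspected with the same configmap read the pipeline starts from (context name assumed to match the profile):

  # Show the patched Corefile with the host.minikube.internal hosts entry
  kubectl --context force-systemd-flag-700637 -n kube-system get configmap coredns -o yaml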
	I0226 12:25:48.962225  747648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37001 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/force-systemd-flag-700637/id_rsa Username:docker}
	I0226 12:25:49.100138  747648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0226 12:25:49.249837  747648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
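Once the storage-provisioner and storageclass manifests are applied, the addon's objects can be checked from the host; a sketch (the storage-provisioner pod name is the one minikube's addon conventionally creates, not read from this log):

  kubectl --context force-systemd-flag-700637 get storageclass
  kubectl --context force-systemd-flag-700637 -n kube-system get pod storage-provisioner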
	I0226 12:25:49.270410  747648 kapi.go:248] "coredns" deployment in "kube-system" namespace and "force-systemd-flag-700637" context rescaled to 1 replicas
	I0226 12:25:49.270454  747648 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0226 12:25:49.272920  747648 out.go:177] * Verifying Kubernetes components...
	
	
	==> CRI-O <==
	Feb 26 12:25:22 pause-534129 crio[2309]: time="2024-02-26 12:25:22.610877939Z" level=info msg="Starting container: 45baf296ecab261085d7178b6cf2fe7d71bafc7e411fe4c0c999ff4f60f0475c" id=fb1a3210-ce9c-4f79-99e7-32d7bcd42108 name=/runtime.v1.RuntimeService/StartContainer
	Feb 26 12:25:22 pause-534129 crio[2309]: time="2024-02-26 12:25:22.626543459Z" level=info msg="Started container" PID=3293 containerID=45baf296ecab261085d7178b6cf2fe7d71bafc7e411fe4c0c999ff4f60f0475c description=kube-system/kube-apiserver-pause-534129/kube-apiserver id=fb1a3210-ce9c-4f79-99e7-32d7bcd42108 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6dda202a706c625907dabfb7013a6e495bc4f6eaa1ff4370471517c6296cafb2
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.790959440Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.848876144Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.848910834Z" level=info msg="Updated default CNI network name to kindnet"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.848928073Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.900586329Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.900618467Z" level=info msg="Updated default CNI network name to kindnet"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.900635533Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.911896450Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.911930583Z" level=info msg="Updated default CNI network name to kindnet"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.911951013Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.931981893Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Feb 26 12:25:26 pause-534129 crio[2309]: time="2024-02-26 12:25:26.932020136Z" level=info msg="Updated default CNI network name to kindnet"
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.880372903Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=3116683a-c61e-4c9d-85d9-e41b90a8cc27 name=/runtime.v1.ImageService/ImageStatus
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.880579485Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3116683a-c61e-4c9d-85d9-e41b90a8cc27 name=/runtime.v1.ImageService/ImageStatus
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.882069960Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=72f7d7c5-5c03-4077-bdd4-a30faf6d7f87 name=/runtime.v1.ImageService/ImageStatus
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.882266295Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105 registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e],Size_:51393451,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=72f7d7c5-5c03-4077-bdd4-a30faf6d7f87 name=/runtime.v1.ImageService/ImageStatus
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.883363509Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-jphcc/coredns" id=968b7df7-ea0a-4947-a9b3-eb69f0537daa name=/runtime.v1.RuntimeService/CreateContainer
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.883457160Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.924583914Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3d4f8b8115659a0f4a964188cedde237fdd66ea8512b02045ca101d216ab502f/merged/etc/passwd: no such file or directory"
	Feb 26 12:25:30 pause-534129 crio[2309]: time="2024-02-26 12:25:30.924639584Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3d4f8b8115659a0f4a964188cedde237fdd66ea8512b02045ca101d216ab502f/merged/etc/group: no such file or directory"
	Feb 26 12:25:31 pause-534129 crio[2309]: time="2024-02-26 12:25:31.031539484Z" level=info msg="Created container bd5e27bc4642d5e92acaee86b08b516bd3defdae66321b7ce6f93aad0de931fc: kube-system/coredns-5dd5756b68-jphcc/coredns" id=968b7df7-ea0a-4947-a9b3-eb69f0537daa name=/runtime.v1.RuntimeService/CreateContainer
	Feb 26 12:25:31 pause-534129 crio[2309]: time="2024-02-26 12:25:31.032149066Z" level=info msg="Starting container: bd5e27bc4642d5e92acaee86b08b516bd3defdae66321b7ce6f93aad0de931fc" id=046d17f2-be9d-4782-98a6-7790642073b6 name=/runtime.v1.RuntimeService/StartContainer
	Feb 26 12:25:31 pause-534129 crio[2309]: time="2024-02-26 12:25:31.044459026Z" level=info msg="Started container" PID=3629 containerID=bd5e27bc4642d5e92acaee86b08b516bd3defdae66321b7ce6f93aad0de931fc description=kube-system/coredns-5dd5756b68-jphcc/coredns id=046d17f2-be9d-4782-98a6-7790642073b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6274b942ec4cf388887e6fc3f9e3a07398b71d79d205989e0d21c7ba374cd356
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bd5e27bc4642d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                     20 seconds ago       Running             coredns                   2                   6274b942ec4cf       coredns-5dd5756b68-jphcc
	45baf296ecab2       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                     28 seconds ago       Running             kube-apiserver            2                   6dda202a706c6       kube-apiserver-pause-534129
	48e1c9fe1bef8       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                     29 seconds ago       Running             kube-scheduler            2                   df4f7a419dbae       kube-scheduler-pause-534129
	fa9de77e9f95f       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                     29 seconds ago       Running             etcd                      2                   ebb1f1255f0ab       etcd-pause-534129
	75edd03cac47e       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                     29 seconds ago       Running             kube-controller-manager   2                   9115a7a998ae7       kube-controller-manager-pause-534129
	ee1c307b25cee       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                     About a minute ago   Exited              kube-apiserver            1                   6dda202a706c6       kube-apiserver-pause-534129
	8e4d16140ea0b       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                     About a minute ago   Running             kube-proxy                1                   4890087dcd06e       kube-proxy-6stnr
	37b20f6e4c336       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                     About a minute ago   Running             kindnet-cni               1                   40c0bd9f28028       kindnet-zgq8r
	1a648e526091c       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                     About a minute ago   Exited              coredns                   1                   6274b942ec4cf       coredns-5dd5756b68-jphcc
	c5b43e3ff09fb       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                     About a minute ago   Exited              kube-scheduler            1                   df4f7a419dbae       kube-scheduler-pause-534129
	e692a1d35b627       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                     About a minute ago   Exited              etcd                      1                   ebb1f1255f0ab       etcd-pause-534129
	912ef0446e1c8       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                     About a minute ago   Exited              kube-controller-manager   1                   9115a7a998ae7       kube-controller-manager-pause-534129
	b52d308b68e37       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988   2 minutes ago        Exited              kindnet-cni               0                   40c0bd9f28028       kindnet-zgq8r
	479bd7603543d       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                     2 minutes ago        Exited              kube-proxy                0                   4890087dcd06e       kube-proxy-6stnr
	
	
	==> coredns [1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:48089 - 44370 "HINFO IN 7986261383484638819.541528343365983935. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.024839861s
	
	
	==> coredns [bd5e27bc4642d5e92acaee86b08b516bd3defdae66321b7ce6f93aad0de931fc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60210 - 20494 "HINFO IN 8590679988724923097.5105835473569010486. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025236198s
	
	
	==> describe nodes <==
	Name:               pause-534129
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-534129
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4011915ad0e9b27ff42994854397cc2ed93516c6
	                    minikube.k8s.io/name=pause-534129
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_26T12_23_23_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Feb 2024 12:23:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-534129
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Feb 2024 12:25:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Feb 2024 12:25:26 +0000   Mon, 26 Feb 2024 12:23:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Feb 2024 12:25:26 +0000   Mon, 26 Feb 2024 12:23:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Feb 2024 12:25:26 +0000   Mon, 26 Feb 2024 12:23:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Feb 2024 12:25:26 +0000   Mon, 26 Feb 2024 12:23:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-534129
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 7cb427044f2241f5bf4bfe352f0c72af
	  System UUID:                a7bcc0c0-9543-421f-ad24-7ab72b1a4ebf
	  Boot ID:                    18acc680-2ad9-4339-83b8-bdf83df5c458
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-jphcc                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m16s
	  kube-system                 etcd-pause-534129                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m29s
	  kube-system                 kindnet-zgq8r                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-pause-534129             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-controller-manager-pause-534129    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-6stnr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-pause-534129             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m14s                  kube-proxy       
	  Normal  Starting                 24s                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m38s (x8 over 2m39s)  kubelet          Node pause-534129 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m38s (x8 over 2m39s)  kubelet          Node pause-534129 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m38s (x8 over 2m39s)  kubelet          Node pause-534129 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m29s                  kubelet          Node pause-534129 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m29s                  kubelet          Node pause-534129 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m29s                  kubelet          Node pause-534129 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m16s                  node-controller  Node pause-534129 event: Registered Node pause-534129 in Controller
	  Normal  NodeReady                2m13s                  kubelet          Node pause-534129 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  25s (x6 over 89s)      kubelet          Node pause-534129 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x6 over 89s)      kubelet          Node pause-534129 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x6 over 89s)      kubelet          Node pause-534129 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s                    node-controller  Node pause-534129 event: Registered Node pause-534129 in Controller
	
	
	==> dmesg <==
	[  +0.001050] FS-Cache: O-key=[8] '34e6c90000000000'
	[  +0.000726] FS-Cache: N-cookie c=000000e3 [p=000000da fl=2 nc=0 na=1]
	[  +0.000928] FS-Cache: N-cookie d=00000000841d89c0{9p.inode} n=0000000032cf39ba
	[  +0.001073] FS-Cache: N-key=[8] '34e6c90000000000'
	[  +0.003158] FS-Cache: Duplicate cookie detected
	[  +0.000716] FS-Cache: O-cookie c=000000dd [p=000000da fl=226 nc=0 na=1]
	[  +0.001109] FS-Cache: O-cookie d=00000000841d89c0{9p.inode} n=00000000e6eb4eb1
	[  +0.001112] FS-Cache: O-key=[8] '34e6c90000000000'
	[  +0.000700] FS-Cache: N-cookie c=000000e4 [p=000000da fl=2 nc=0 na=1]
	[  +0.001083] FS-Cache: N-cookie d=00000000841d89c0{9p.inode} n=00000000ef560195
	[  +0.001212] FS-Cache: N-key=[8] '34e6c90000000000'
	[  +2.493751] FS-Cache: Duplicate cookie detected
	[  +0.000848] FS-Cache: O-cookie c=000000db [p=000000da fl=226 nc=0 na=1]
	[  +0.001068] FS-Cache: O-cookie d=00000000841d89c0{9p.inode} n=00000000bf2216f9
	[  +0.001118] FS-Cache: O-key=[8] '33e6c90000000000'
	[  +0.000721] FS-Cache: N-cookie c=000000e6 [p=000000da fl=2 nc=0 na=1]
	[  +0.001039] FS-Cache: N-cookie d=00000000841d89c0{9p.inode} n=000000004a503876
	[  +0.001168] FS-Cache: N-key=[8] '33e6c90000000000'
	[  +0.381527] FS-Cache: Duplicate cookie detected
	[  +0.000702] FS-Cache: O-cookie c=000000e0 [p=000000da fl=226 nc=0 na=1]
	[  +0.000996] FS-Cache: O-cookie d=00000000841d89c0{9p.inode} n=0000000072d896e1
	[  +0.001119] FS-Cache: O-key=[8] '39e6c90000000000'
	[  +0.000703] FS-Cache: N-cookie c=000000e7 [p=000000da fl=2 nc=0 na=1]
	[  +0.001153] FS-Cache: N-cookie d=00000000841d89c0{9p.inode} n=0000000043dc7f09
	[  +0.001179] FS-Cache: N-key=[8] '39e6c90000000000'
	
	
	==> etcd [e692a1d35b627483aaa8f5042483e07cb08341cdec67259e2d33f150c5b565ac] <==
	{"level":"info","ts":"2024-02-26T12:24:05.180372Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-26T12:24:06.176724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-26T12:24:06.176844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-26T12:24:06.176904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-02-26T12:24:06.176948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2024-02-26T12:24:06.176987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-26T12:24:06.177033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2024-02-26T12:24:06.17708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-26T12:24:06.180918Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-534129 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-26T12:24:06.181117Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T12:24:06.182153Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-02-26T12:24:06.182859Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T12:24:06.183858Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-26T12:24:06.18872Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-26T12:24:06.201535Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-26T12:24:19.128815Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-26T12:24:19.128948Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-534129","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2024-02-26T12:24:19.129653Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-26T12:24:19.129811Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-26T12:24:19.20504Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-26T12:24:19.2051Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-26T12:24:19.20515Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2024-02-26T12:24:19.207397Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-26T12:24:19.207542Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-26T12:24:19.207557Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-534129","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [fa9de77e9f95fbe9d4edb231f338c859854cbad6bb067bbc9e4878c99bf3dd96] <==
	{"level":"info","ts":"2024-02-26T12:25:21.763659Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-26T12:25:21.763669Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-26T12:25:21.763898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2024-02-26T12:25:21.763965Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2024-02-26T12:25:21.764053Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-26T12:25:21.764086Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-26T12:25:21.767278Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-26T12:25:21.767457Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-26T12:25:21.767489Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-26T12:25:21.767591Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-26T12:25:21.767605Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-26T12:25:23.128703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-26T12:25:23.128752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-26T12:25:23.128778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-26T12:25:23.128792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2024-02-26T12:25:23.128802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-02-26T12:25:23.128812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2024-02-26T12:25:23.128821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-02-26T12:25:23.13294Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-534129 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-26T12:25:23.133092Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T12:25:23.137337Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-26T12:25:23.137477Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-26T12:25:23.138518Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-02-26T12:25:23.148695Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-26T12:25:23.148774Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:25:51 up 1 day,  1:08,  0 users,  load average: 4.91, 2.89, 2.13
	Linux pause-534129 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [37b20f6e4c3363990b1728031036b5bb9b7d7982cc23f183eb0e6c08f2f80e9e] <==
	I0226 12:24:13.219909       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0226 12:24:13.220110       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0226 12:24:13.220267       1 main.go:116] setting mtu 1500 for CNI 
	I0226 12:24:13.220310       1 main.go:146] kindnetd IP family: "ipv4"
	I0226 12:24:13.220357       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0226 12:24:13.421479       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0226 12:24:13.421736       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0226 12:24:14.422326       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0226 12:25:19.997321       1 main.go:191] Failed to get nodes, retrying after error: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	I0226 12:25:26.790674       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0226 12:25:26.790714       1 main.go:227] handling current node
	I0226 12:25:36.811618       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0226 12:25:36.811652       1 main.go:227] handling current node
	I0226 12:25:46.827982       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0226 12:25:46.828015       1 main.go:227] handling current node
	
	
	==> kindnet [b52d308b68e373c54a16b0f05bf674644288beeb14783a8454d7e55e568251a1] <==
	I0226 12:23:37.819436       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0226 12:23:37.819513       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0226 12:23:37.819622       1 main.go:116] setting mtu 1500 for CNI 
	I0226 12:23:37.819632       1 main.go:146] kindnetd IP family: "ipv4"
	I0226 12:23:37.819642       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0226 12:23:38.124477       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0226 12:23:38.124587       1 main.go:227] handling current node
	I0226 12:23:48.229277       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0226 12:23:48.229305       1 main.go:227] handling current node
	
	
	==> kube-apiserver [45baf296ecab261085d7178b6cf2fe7d71bafc7e411fe4c0c999ff4f60f0475c] <==
	I0226 12:25:26.490894       1 naming_controller.go:291] Starting NamingConditionController
	I0226 12:25:26.490940       1 establishing_controller.go:76] Starting EstablishingController
	I0226 12:25:26.490983       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0226 12:25:26.491024       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0226 12:25:26.491065       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0226 12:25:26.764539       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0226 12:25:26.770549       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0226 12:25:26.792239       1 shared_informer.go:318] Caches are synced for configmaps
	I0226 12:25:26.792334       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0226 12:25:26.799123       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0226 12:25:26.803784       1 aggregator.go:166] initial CRD sync complete...
	I0226 12:25:26.803879       1 autoregister_controller.go:141] Starting autoregister controller
	I0226 12:25:26.803928       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0226 12:25:26.803974       1 cache.go:39] Caches are synced for autoregister controller
	I0226 12:25:26.813456       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0226 12:25:26.825712       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0226 12:25:26.863589       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0226 12:25:26.867390       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0226 12:25:26.867475       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0226 12:25:27.305995       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0226 12:25:32.196154       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0226 12:25:32.346778       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0226 12:25:32.358058       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0226 12:25:32.432045       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0226 12:25:32.439574       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [ee1c307b25cee84a6ed691ac79bdbf66fe243ae67eeed806ab2ca9188bcc6013] <==
	W0226 12:25:04.727367       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:04.774852       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:05.096879       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:05.345099       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:05.544975       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:05.562169       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:05.836619       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:05.999916       1 logging.go:59] [core] [Channel #13 SubChannel #15] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:06.033465       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:06.708962       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:07.273795       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:07.496965       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:07.570220       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:07.915410       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0226 12:25:08.197909       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0226 12:25:10.034336       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	W0226 12:25:12.493026       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0226 12:25:15.094892       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0226 12:25:15.095003       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0226 12:25:15.096150       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0226 12:25:15.096216       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0226 12:25:15.097438       1 trace.go:236] Trace[1126869363]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:288bc4ad-1e34-4b07-b7ef-89b5a2dec07c,client:192.168.76.2,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-534129,user-agent:kubelet/v1.28.4 (linux/arm64) kubernetes/bae2c62,verb:GET (26-Feb-2024 12:25:05.095) (total time: 10001ms):
	Trace[1126869363]: [10.00199433s] [10.00199433s] END
	E0226 12:25:15.097600       1 timeout.go:142] post-timeout activity - time-elapsed: 2.567579ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-534129" result: <nil>
	F0226 12:25:19.826312       1 hooks.go:203] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [75edd03cac47e5d5d8d371f1df9d4621bfa5f06cf53611bf6dc131eb31f13816] <==
	I0226 12:25:38.204340       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0226 12:25:38.204454       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-534129"
	I0226 12:25:38.204531       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0226 12:25:38.204578       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0226 12:25:38.204664       1 taint_manager.go:210] "Sending events to api server"
	I0226 12:25:38.204994       1 event.go:307] "Event occurred" object="pause-534129" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-534129 event: Registered Node pause-534129 in Controller"
	I0226 12:25:38.207911       1 shared_informer.go:318] Caches are synced for disruption
	I0226 12:25:38.212741       1 shared_informer.go:318] Caches are synced for persistent volume
	I0226 12:25:38.216423       1 shared_informer.go:318] Caches are synced for service account
	I0226 12:25:38.220587       1 shared_informer.go:318] Caches are synced for cronjob
	I0226 12:25:38.227626       1 shared_informer.go:318] Caches are synced for PV protection
	I0226 12:25:38.231630       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0226 12:25:38.239623       1 shared_informer.go:318] Caches are synced for attach detach
	I0226 12:25:38.239658       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0226 12:25:38.251832       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0226 12:25:38.256779       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0226 12:25:38.269269       1 shared_informer.go:318] Caches are synced for resource quota
	I0226 12:25:38.322244       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0226 12:25:38.328144       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0226 12:25:38.330383       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0226 12:25:38.332698       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0226 12:25:38.334484       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0226 12:25:38.771814       1 shared_informer.go:318] Caches are synced for garbage collector
	I0226 12:25:38.774046       1 shared_informer.go:318] Caches are synced for garbage collector
	I0226 12:25:38.774095       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [912ef0446e1c8650cf8df5bb94c188a383e0b24fc618dceceb31328df37807ed] <==
	I0226 12:24:04.683624       1 serving.go:348] Generated self-signed cert in-memory
	I0226 12:24:06.394295       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0226 12:24:06.394330       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 12:24:06.395674       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0226 12:24:06.395785       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0226 12:24:06.396712       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0226 12:24:06.396792       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [479bd7603543de88b10da38c411af444380df239c314df144c8b43df56a90ee7] <==
	I0226 12:23:36.539290       1 server_others.go:69] "Using iptables proxy"
	I0226 12:23:36.563086       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0226 12:23:36.671888       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0226 12:23:36.673596       1 server_others.go:152] "Using iptables Proxier"
	I0226 12:23:36.673639       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0226 12:23:36.673646       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0226 12:23:36.673680       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0226 12:23:36.673882       1 server.go:846] "Version info" version="v1.28.4"
	I0226 12:23:36.673901       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 12:23:36.675123       1 config.go:188] "Starting service config controller"
	I0226 12:23:36.675138       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0226 12:23:36.675155       1 config.go:97] "Starting endpoint slice config controller"
	I0226 12:23:36.675158       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0226 12:23:36.675489       1 config.go:315] "Starting node config controller"
	I0226 12:23:36.675496       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0226 12:23:36.775410       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0226 12:23:36.775415       1 shared_informer.go:318] Caches are synced for service config
	I0226 12:23:36.775537       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [8e4d16140ea0b33e2b235d2b4233d1de3c2760692719185ea8efc8c25e287bfc] <==
	I0226 12:24:14.265156       1 server_others.go:69] "Using iptables proxy"
	E0226 12:24:14.267307       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-534129": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:15.370195       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-534129": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:25:20.001295       1 node.go:130] Failed to retrieve node info: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes pause-534129)
	I0226 12:25:26.846486       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0226 12:25:27.152295       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0226 12:25:27.154210       1 server_others.go:152] "Using iptables Proxier"
	I0226 12:25:27.154310       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0226 12:25:27.154343       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0226 12:25:27.154500       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0226 12:25:27.162207       1 server.go:846] "Version info" version="v1.28.4"
	I0226 12:25:27.163537       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0226 12:25:27.164376       1 config.go:188] "Starting service config controller"
	I0226 12:25:27.164453       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0226 12:25:27.164509       1 config.go:97] "Starting endpoint slice config controller"
	I0226 12:25:27.164550       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0226 12:25:27.165452       1 config.go:315] "Starting node config controller"
	I0226 12:25:27.165513       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0226 12:25:27.267464       1 shared_informer.go:318] Caches are synced for node config
	I0226 12:25:27.267593       1 shared_informer.go:318] Caches are synced for service config
	I0226 12:25:27.267607       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [48e1c9fe1bef8a7a1668683a53f33ed6da71035b818e348421f28421d0c375a8] <==
	W0226 12:25:26.682444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0226 12:25:26.685020       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0226 12:25:26.682493       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0226 12:25:26.685100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0226 12:25:26.682540       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0226 12:25:26.685176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0226 12:25:26.682592       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0226 12:25:26.685248       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0226 12:25:26.682648       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0226 12:25:26.685323       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0226 12:25:26.682761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0226 12:25:26.685400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0226 12:25:26.682830       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0226 12:25:26.685479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0226 12:25:26.682866       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0226 12:25:26.685556       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0226 12:25:26.682927       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0226 12:25:26.685631       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0226 12:25:26.682968       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0226 12:25:26.685701       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0226 12:25:26.683012       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0226 12:25:26.685789       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0226 12:25:26.683048       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0226 12:25:26.685875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0226 12:25:28.256355       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c5b43e3ff09fb5a03cc59819a443bf6f8e3b9f75d241d9bb954cea22458e9f1e] <==
	E0226 12:24:12.030835       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:14.682535       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:14.682666       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:14.719369       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:14.719498       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:14.739975       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:14.740090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:15.556093       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:15.556137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:15.841931       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:15.841976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.76.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:15.970307       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:15.970353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:16.098572       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:16.098614       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.76.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:16.135031       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:16.135077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0226 12:24:16.174737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:16.174872       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.76.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0226 12:24:18.967977       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0226 12:24:18.968544       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0226 12:24:18.971740       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0226 12:24:18.971798       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0226 12:24:18.972121       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0226 12:24:18.972182       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 26 12:25:22 pause-534129 kubelet[3063]: I0226 12:25:22.478807    3063 status_manager.go:853] "Failed to get status for pod" podUID="e87490c73aa544fd7b73853c2ddd5f1f" pod="kube-system/kube-controller-manager-pause-534129" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-534129\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Feb 26 12:25:22 pause-534129 kubelet[3063]: I0226 12:25:22.481038    3063 status_manager.go:853] "Failed to get status for pod" podUID="630b5db601f14e02b490489c47f27f89" pod="kube-system/kube-scheduler-pause-534129" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-534129\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Feb 26 12:25:22 pause-534129 kubelet[3063]: E0226 12:25:22.681718    3063 projected.go:198] Error preparing data for projected volume kube-api-access-vmhd7 for pod kube-system/coredns-5dd5756b68-jphcc: failed to fetch token: Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/coredns/token": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 26 12:25:22 pause-534129 kubelet[3063]: E0226 12:25:22.681795    3063 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1389dc8e-2557-486e-be8a-598958aa8372-kube-api-access-vmhd7 podName:1389dc8e-2557-486e-be8a-598958aa8372 nodeName:}" failed. No retries permitted until 2024-02-26 12:25:24.681771742 +0000 UTC m=+62.744718940 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vmhd7" (UniqueName: "kubernetes.io/projected/1389dc8e-2557-486e-be8a-598958aa8372-kube-api-access-vmhd7") pod "coredns-5dd5756b68-jphcc" (UID: "1389dc8e-2557-486e-be8a-598958aa8372") : failed to fetch token: Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/coredns/token": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 26 12:25:22 pause-534129 kubelet[3063]: E0226 12:25:22.681859    3063 projected.go:198] Error preparing data for projected volume kube-api-access-bdgjt for pod kube-system/kube-proxy-6stnr: failed to fetch token: Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 26 12:25:22 pause-534129 kubelet[3063]: E0226 12:25:22.681888    3063 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ddf2146c-15dd-4280-b05f-6476a69b62a2-kube-api-access-bdgjt podName:ddf2146c-15dd-4280-b05f-6476a69b62a2 nodeName:}" failed. No retries permitted until 2024-02-26 12:25:24.681879037 +0000 UTC m=+62.744826236 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bdgjt" (UniqueName: "kubernetes.io/projected/ddf2146c-15dd-4280-b05f-6476a69b62a2-kube-api-access-bdgjt") pod "kube-proxy-6stnr" (UID: "ddf2146c-15dd-4280-b05f-6476a69b62a2") : failed to fetch token: Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 26 12:25:22 pause-534129 kubelet[3063]: E0226 12:25:22.681939    3063 projected.go:198] Error preparing data for projected volume kube-api-access-58rcj for pod kube-system/kindnet-zgq8r: failed to fetch token: Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet/token": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 26 12:25:22 pause-534129 kubelet[3063]: E0226 12:25:22.681967    3063 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10aa1f57-33c0-4f80-b9dc-ac083e1b47c3-kube-api-access-58rcj podName:10aa1f57-33c0-4f80-b9dc-ac083e1b47c3 nodeName:}" failed. No retries permitted until 2024-02-26 12:25:24.681959133 +0000 UTC m=+62.744906332 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-58rcj" (UniqueName: "kubernetes.io/projected/10aa1f57-33c0-4f80-b9dc-ac083e1b47c3-kube-api-access-58rcj") pod "kindnet-zgq8r" (UID: "10aa1f57-33c0-4f80-b9dc-ac083e1b47c3") : failed to fetch token: Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet/token": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 26 12:25:22 pause-534129 kubelet[3063]: I0226 12:25:22.867463    3063 kubelet_node_status.go:70] "Attempting to register node" node="pause-534129"
	Feb 26 12:25:22 pause-534129 kubelet[3063]: E0226 12:25:22.867867    3063 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="pause-534129"
	Feb 26 12:25:26 pause-534129 kubelet[3063]: I0226 12:25:26.069804    3063 kubelet_node_status.go:70] "Attempting to register node" node="pause-534129"
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.394277    3063 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.394568    3063 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.394708    3063 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.713266    3063 projected.go:198] Error preparing data for projected volume kube-api-access-vmhd7 for pod kube-system/coredns-5dd5756b68-jphcc: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:pause-534129" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'pause-534129' and this object
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.713353    3063 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1389dc8e-2557-486e-be8a-598958aa8372-kube-api-access-vmhd7 podName:1389dc8e-2557-486e-be8a-598958aa8372 nodeName:}" failed. No retries permitted until 2024-02-26 12:25:30.713330435 +0000 UTC m=+68.776277642 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vmhd7" (UniqueName: "kubernetes.io/projected/1389dc8e-2557-486e-be8a-598958aa8372-kube-api-access-vmhd7") pod "coredns-5dd5756b68-jphcc" (UID: "1389dc8e-2557-486e-be8a-598958aa8372") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:pause-534129" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'pause-534129' and this object
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.713556    3063 projected.go:198] Error preparing data for projected volume kube-api-access-58rcj for pod kube-system/kindnet-zgq8r: failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:pause-534129" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'pause-534129' and this object
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.713612    3063 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10aa1f57-33c0-4f80-b9dc-ac083e1b47c3-kube-api-access-58rcj podName:10aa1f57-33c0-4f80-b9dc-ac083e1b47c3 nodeName:}" failed. No retries permitted until 2024-02-26 12:25:30.713598882 +0000 UTC m=+68.776546081 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-58rcj" (UniqueName: "kubernetes.io/projected/10aa1f57-33c0-4f80-b9dc-ac083e1b47c3-kube-api-access-58rcj") pod "kindnet-zgq8r" (UID: "10aa1f57-33c0-4f80-b9dc-ac083e1b47c3") : failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:pause-534129" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'pause-534129' and this object
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.714049    3063 projected.go:198] Error preparing data for projected volume kube-api-access-bdgjt for pod kube-system/kube-proxy-6stnr: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:pause-534129" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'pause-534129' and this object
	Feb 26 12:25:26 pause-534129 kubelet[3063]: E0226 12:25:26.714107    3063 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ddf2146c-15dd-4280-b05f-6476a69b62a2-kube-api-access-bdgjt podName:ddf2146c-15dd-4280-b05f-6476a69b62a2 nodeName:}" failed. No retries permitted until 2024-02-26 12:25:30.714090183 +0000 UTC m=+68.777037381 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bdgjt" (UniqueName: "kubernetes.io/projected/ddf2146c-15dd-4280-b05f-6476a69b62a2-kube-api-access-bdgjt") pod "kube-proxy-6stnr" (UID: "ddf2146c-15dd-4280-b05f-6476a69b62a2") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:pause-534129" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'pause-534129' and this object
	Feb 26 12:25:26 pause-534129 kubelet[3063]: I0226 12:25:26.827907    3063 kubelet_node_status.go:108] "Node was previously registered" node="pause-534129"
	Feb 26 12:25:26 pause-534129 kubelet[3063]: I0226 12:25:26.828015    3063 kubelet_node_status.go:73] "Successfully registered node" node="pause-534129"
	Feb 26 12:25:26 pause-534129 kubelet[3063]: I0226 12:25:26.836111    3063 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 26 12:25:26 pause-534129 kubelet[3063]: I0226 12:25:26.843766    3063 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 26 12:25:30 pause-534129 kubelet[3063]: I0226 12:25:30.879533    3063 scope.go:117] "RemoveContainer" containerID="1a648e526091cdc707c396f0b02b6bcb82164874055edf9cb442bc97bf42e82c"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-534129 -n pause-534129
helpers_test.go:261: (dbg) Run:  kubectl --context pause-534129 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (122.74s)
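
The failure mode above is consistent throughout the post-mortem: every scheduler and kubelet error is either "connect: connection refused" against 192.168.76.2:8443 or an RBAC/token "forbidden" response while the control plane restarts during the second start. For a local reproduction, a rough manual check of the same conditions could look like this (a sketch only; the profile name pause-534129 is specific to this run):

	# is the apiserver answering again for this profile?
	out/minikube-linux-arm64 status -p pause-534129 --format='{{.APIServer}}'
	# has RBAC caught up for the scheduler identity seen in the logs?
	kubectl --context pause-534129 auth can-i list nodes --as=system:kube-scheduler
	# anything still stuck outside Running?
	kubectl --context pause-534129 get pods -A --field-selector=status.phase!=Running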

                                                
                                    

Test pass (279/314)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.05
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.38
9 TestDownloadOnly/v1.16.0/DeleteAll 0.4
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.25
12 TestDownloadOnly/v1.28.4/json-events 9.36
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.09
18 TestDownloadOnly/v1.28.4/DeleteAll 0.22
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.16
21 TestDownloadOnly/v1.29.0-rc.2/json-events 8.91
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.29
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.37
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.24
30 TestBinaryMirror 0.56
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 166.25
38 TestAddons/parallel/Registry 15.78
40 TestAddons/parallel/InspektorGadget 11.88
41 TestAddons/parallel/MetricsServer 6.89
44 TestAddons/parallel/CSI 67.05
45 TestAddons/parallel/Headlamp 11.49
46 TestAddons/parallel/CloudSpanner 6.6
47 TestAddons/parallel/LocalPath 53.32
48 TestAddons/parallel/NvidiaDevicePlugin 6.51
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.18
53 TestAddons/StoppedEnableDisable 12.27
54 TestCertOptions 37.22
55 TestCertExpiration 245.7
57 TestForceSystemdFlag 39.81
58 TestForceSystemdEnv 41.91
64 TestErrorSpam/setup 31.35
65 TestErrorSpam/start 0.78
66 TestErrorSpam/status 1.06
67 TestErrorSpam/pause 1.72
68 TestErrorSpam/unpause 1.84
69 TestErrorSpam/stop 1.47
72 TestFunctional/serial/CopySyncFile 0.01
73 TestFunctional/serial/StartWithProxy 48.73
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 40.32
76 TestFunctional/serial/KubeContext 0.07
77 TestFunctional/serial/KubectlGetPods 0.11
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.53
81 TestFunctional/serial/CacheCmd/cache/add_local 1.08
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.98
86 TestFunctional/serial/CacheCmd/cache/delete 0.15
87 TestFunctional/serial/MinikubeKubectlCmd 0.14
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
89 TestFunctional/serial/ExtraConfig 34.35
90 TestFunctional/serial/ComponentHealth 0.11
91 TestFunctional/serial/LogsCmd 1.7
92 TestFunctional/serial/LogsFileCmd 1.73
93 TestFunctional/serial/InvalidService 4.31
95 TestFunctional/parallel/ConfigCmd 0.6
96 TestFunctional/parallel/DashboardCmd 13.28
97 TestFunctional/parallel/DryRun 0.47
98 TestFunctional/parallel/InternationalLanguage 0.2
99 TestFunctional/parallel/StatusCmd 1.08
103 TestFunctional/parallel/ServiceCmdConnect 10.62
104 TestFunctional/parallel/AddonsCmd 0.17
105 TestFunctional/parallel/PersistentVolumeClaim 25.9
107 TestFunctional/parallel/SSHCmd 0.69
108 TestFunctional/parallel/CpCmd 2.21
110 TestFunctional/parallel/FileSync 0.39
111 TestFunctional/parallel/CertSync 2.09
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.78
119 TestFunctional/parallel/License 0.35
120 TestFunctional/parallel/Version/short 0.07
121 TestFunctional/parallel/Version/components 1.18
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
126 TestFunctional/parallel/ImageCommands/ImageBuild 2.59
127 TestFunctional/parallel/ImageCommands/Setup 2.56
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.92
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
133 TestFunctional/parallel/ProfileCmd/profile_list 0.5
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.53
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.22
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.51
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.89
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.28
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.98
152 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
153 TestFunctional/parallel/ServiceCmd/List 0.54
154 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
155 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
156 TestFunctional/parallel/ServiceCmd/Format 0.4
157 TestFunctional/parallel/ServiceCmd/URL 0.4
158 TestFunctional/parallel/MountCmd/any-port 10.17
159 TestFunctional/parallel/MountCmd/specific-port 2.61
160 TestFunctional/parallel/MountCmd/VerifyCleanup 2.95
161 TestFunctional/delete_addon-resizer_images 0.08
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestIngressAddonLegacy/StartLegacyK8sCluster 85.64
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.01
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.68
174 TestJSONOutput/start/Command 48.08
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 0.76
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 0.68
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 5.87
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.23
199 TestKicCustomNetwork/create_custom_network 43.25
200 TestKicCustomNetwork/use_default_bridge_network 32.93
201 TestKicExistingNetwork 38.95
202 TestKicCustomSubnet 32.92
203 TestKicStaticIP 36.32
204 TestMainNoArgs 0.08
205 TestMinikubeProfile 72.49
208 TestMountStart/serial/StartWithMountFirst 6.89
209 TestMountStart/serial/VerifyMountFirst 0.29
210 TestMountStart/serial/StartWithMountSecond 6.43
211 TestMountStart/serial/VerifyMountSecond 0.28
212 TestMountStart/serial/DeleteFirst 1.64
213 TestMountStart/serial/VerifyMountPostDelete 0.27
214 TestMountStart/serial/Stop 1.21
215 TestMountStart/serial/RestartStopped 8.21
216 TestMountStart/serial/VerifyMountPostStop 0.28
219 TestMultiNode/serial/FreshStart2Nodes 83.01
220 TestMultiNode/serial/DeployApp2Nodes 4.68
221 TestMultiNode/serial/PingHostFrom2Pods 1.08
222 TestMultiNode/serial/AddNode 23.19
223 TestMultiNode/serial/MultiNodeLabels 0.09
224 TestMultiNode/serial/ProfileList 0.34
225 TestMultiNode/serial/CopyFile 10.92
226 TestMultiNode/serial/StopNode 2.38
227 TestMultiNode/serial/StartAfterStop 12.65
228 TestMultiNode/serial/RestartKeepsNodes 122.39
229 TestMultiNode/serial/DeleteNode 5.15
230 TestMultiNode/serial/StopMultiNode 23.93
231 TestMultiNode/serial/RestartMultiNode 76.29
232 TestMultiNode/serial/ValidateNameConflict 36.07
237 TestPreload 141.79
239 TestScheduledStopUnix 108.36
242 TestInsufficientStorage 13.28
243 TestRunningBinaryUpgrade 108.72
245 TestKubernetesUpgrade 386.22
246 TestMissingContainerUpgrade 145.29
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
249 TestNoKubernetes/serial/StartWithK8s 40.79
250 TestNoKubernetes/serial/StartWithStopK8s 29.28
251 TestNoKubernetes/serial/Start 6.06
252 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
253 TestNoKubernetes/serial/ProfileList 5.39
254 TestNoKubernetes/serial/Stop 1.23
255 TestNoKubernetes/serial/StartNoArgs 6.93
256 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
257 TestStoppedBinaryUpgrade/Setup 1.18
258 TestStoppedBinaryUpgrade/Upgrade 77.3
259 TestStoppedBinaryUpgrade/MinikubeLogs 1.24
268 TestPause/serial/Start 58.71
277 TestNetworkPlugins/group/false 6.22
282 TestStartStop/group/old-k8s-version/serial/FirstStart 120.37
283 TestStartStop/group/old-k8s-version/serial/DeployApp 9.53
284 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.18
285 TestStartStop/group/old-k8s-version/serial/Stop 11.97
286 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
287 TestStartStop/group/old-k8s-version/serial/SecondStart 440.67
289 TestStartStop/group/no-preload/serial/FirstStart 64.13
290 TestStartStop/group/no-preload/serial/DeployApp 8.35
291 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
292 TestStartStop/group/no-preload/serial/Stop 12.03
293 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
294 TestStartStop/group/no-preload/serial/SecondStart 617.87
295 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
296 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
297 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
298 TestStartStop/group/old-k8s-version/serial/Pause 3.55
300 TestStartStop/group/embed-certs/serial/FirstStart 55.52
301 TestStartStop/group/embed-certs/serial/DeployApp 8.33
302 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
303 TestStartStop/group/embed-certs/serial/Stop 12
304 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
305 TestStartStop/group/embed-certs/serial/SecondStart 348.63
306 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
307 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
308 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
309 TestStartStop/group/no-preload/serial/Pause 3.31
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 49.67
312 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.36
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.23
314 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.99
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 611.73
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
318 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
319 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.33
320 TestStartStop/group/embed-certs/serial/Pause 3.41
322 TestStartStop/group/newest-cni/serial/FirstStart 43.8
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.33
325 TestStartStop/group/newest-cni/serial/Stop 1.92
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
327 TestStartStop/group/newest-cni/serial/SecondStart 32.93
328 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
331 TestStartStop/group/newest-cni/serial/Pause 2.95
332 TestNetworkPlugins/group/auto/Start 49.28
333 TestNetworkPlugins/group/auto/KubeletFlags 0.3
334 TestNetworkPlugins/group/auto/NetCatPod 11.31
335 TestNetworkPlugins/group/auto/DNS 0.19
336 TestNetworkPlugins/group/auto/Localhost 0.15
337 TestNetworkPlugins/group/auto/HairPin 0.16
338 TestNetworkPlugins/group/kindnet/Start 52.2
339 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
340 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
341 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
342 TestNetworkPlugins/group/kindnet/DNS 0.24
343 TestNetworkPlugins/group/kindnet/Localhost 0.24
344 TestNetworkPlugins/group/kindnet/HairPin 0.22
345 TestNetworkPlugins/group/calico/Start 74.11
346 TestNetworkPlugins/group/calico/ControllerPod 6.01
347 TestNetworkPlugins/group/calico/KubeletFlags 0.35
348 TestNetworkPlugins/group/calico/NetCatPod 10.28
349 TestNetworkPlugins/group/calico/DNS 0.2
350 TestNetworkPlugins/group/calico/Localhost 0.18
351 TestNetworkPlugins/group/calico/HairPin 0.18
352 TestNetworkPlugins/group/custom-flannel/Start 64.62
353 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
354 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.3
355 TestNetworkPlugins/group/custom-flannel/DNS 0.18
356 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
357 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
358 TestNetworkPlugins/group/enable-default-cni/Start 45.38
359 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
360 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.29
361 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
362 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
363 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
364 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
365 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.12
366 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
367 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.71
368 TestNetworkPlugins/group/flannel/Start 74.9
369 TestNetworkPlugins/group/bridge/Start 97.81
370 TestNetworkPlugins/group/flannel/ControllerPod 6.01
371 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
372 TestNetworkPlugins/group/flannel/NetCatPod 10.3
373 TestNetworkPlugins/group/flannel/DNS 0.18
374 TestNetworkPlugins/group/flannel/Localhost 0.18
375 TestNetworkPlugins/group/flannel/HairPin 0.18
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.43
377 TestNetworkPlugins/group/bridge/NetCatPod 12.39
378 TestNetworkPlugins/group/bridge/DNS 0.26
379 TestNetworkPlugins/group/bridge/Localhost 0.23
380 TestNetworkPlugins/group/bridge/HairPin 0.29
x
+
TestDownloadOnly/v1.16.0/json-events (10.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-744997 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-744997 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.051491341s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.05s)
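
The json-events variant exercises the same download path but with start -o=json, which emits one JSON event per line instead of the usual progress output. To watch those events on a local run, something along these lines should work; the jq filter is only an assumption about the event shape, since the raw events are not reproduced in this report:

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-744997 \
	  --force --alsologtostderr --kubernetes-version=v1.16.0 \
	  --container-runtime=crio --driver=docker | jq -c '{type: .type?, data: .data?}'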

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-744997
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-744997: exit status 85 (381.477779ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-744997 | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC |          |
	|         | -p download-only-744997        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 11:44:14
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 11:44:14.294286  613993 out.go:291] Setting OutFile to fd 1 ...
	I0226 11:44:14.294423  613993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:44:14.294435  613993 out.go:304] Setting ErrFile to fd 2...
	I0226 11:44:14.294441  613993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:44:14.294722  613993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
	W0226 11:44:14.294854  613993 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18222-608626/.minikube/config/config.json: open /home/jenkins/minikube-integration/18222-608626/.minikube/config/config.json: no such file or directory
	I0226 11:44:14.295314  613993 out.go:298] Setting JSON to true
	I0226 11:44:14.296176  613993 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":88001,"bootTime":1708859854,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0226 11:44:14.296249  613993 start.go:139] virtualization:  
	I0226 11:44:14.299382  613993 out.go:97] [download-only-744997] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0226 11:44:14.301209  613993 out.go:169] MINIKUBE_LOCATION=18222
	W0226 11:44:14.299540  613993 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball: no such file or directory
	I0226 11:44:14.299603  613993 notify.go:220] Checking for updates...
	I0226 11:44:14.303526  613993 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 11:44:14.305546  613993 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 11:44:14.307571  613993 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	I0226 11:44:14.310262  613993 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0226 11:44:14.313627  613993 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0226 11:44:14.313931  613993 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 11:44:14.335361  613993 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0226 11:44:14.335469  613993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:44:14.409460  613993 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-26 11:44:14.399358118 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 11:44:14.409568  613993 docker.go:295] overlay module found
	I0226 11:44:14.411760  613993 out.go:97] Using the docker driver based on user configuration
	I0226 11:44:14.411786  613993 start.go:299] selected driver: docker
	I0226 11:44:14.411792  613993 start.go:903] validating driver "docker" against <nil>
	I0226 11:44:14.411904  613993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:44:14.469490  613993 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-26 11:44:14.460618293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 11:44:14.469656  613993 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 11:44:14.469951  613993 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0226 11:44:14.470110  613993 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0226 11:44:14.472246  613993 out.go:169] Using Docker driver with root privileges
	I0226 11:44:14.473939  613993 cni.go:84] Creating CNI manager for ""
	I0226 11:44:14.473956  613993 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 11:44:14.473980  613993 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0226 11:44:14.473994  613993 start_flags.go:323] config:
	{Name:download-only-744997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-744997 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:44:14.475901  613993 out.go:97] Starting control plane node download-only-744997 in cluster download-only-744997
	I0226 11:44:14.475929  613993 cache.go:121] Beginning downloading kic base image for docker with crio
	I0226 11:44:14.477671  613993 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0226 11:44:14.477697  613993 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0226 11:44:14.477849  613993 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 11:44:14.492521  613993 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0226 11:44:14.492726  613993 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0226 11:44:14.492836  613993 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0226 11:44:14.546001  613993 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0226 11:44:14.546032  613993 cache.go:56] Caching tarball of preloaded images
	I0226 11:44:14.546207  613993 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0226 11:44:14.548491  613993 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0226 11:44:14.548533  613993 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0226 11:44:14.655192  613993 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0226 11:44:19.261359  613993 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-744997"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.38s)
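
Note that the PASS here is despite minikube logs exiting non-zero: a --download-only profile never creates a node, so there is nothing for logs to read and the command bails out with the "control plane node does not exist" hint shown above. Judging from the test name, the assertion is about how long the logs call takes rather than whether it succeeds. Reproducing the same behaviour by hand (a hypothetical local invocation, reusing this run's profile name):

	out/minikube-linux-arm64 logs -p download-only-744997
	echo "logs exited with status $?"   # 85 in this run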

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.40s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-744997
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.25s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (9.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-333231 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-333231 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.359739397s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (9.36s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-333231
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-333231: exit status 85 (85.129653ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-744997 | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC |                     |
	|         | -p download-only-744997        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC | 26 Feb 24 11:44 UTC |
	| delete  | -p download-only-744997        | download-only-744997 | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC | 26 Feb 24 11:44 UTC |
	| start   | -o=json --download-only        | download-only-333231 | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC |                     |
	|         | -p download-only-333231        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 11:44:25
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 11:44:25.392043  614159 out.go:291] Setting OutFile to fd 1 ...
	I0226 11:44:25.392222  614159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:44:25.392233  614159 out.go:304] Setting ErrFile to fd 2...
	I0226 11:44:25.392239  614159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:44:25.392510  614159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
	I0226 11:44:25.393001  614159 out.go:298] Setting JSON to true
	I0226 11:44:25.393826  614159 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":88012,"bootTime":1708859854,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0226 11:44:25.393902  614159 start.go:139] virtualization:  
	I0226 11:44:25.442481  614159 out.go:97] [download-only-333231] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0226 11:44:25.475905  614159 out.go:169] MINIKUBE_LOCATION=18222
	I0226 11:44:25.442705  614159 notify.go:220] Checking for updates...
	I0226 11:44:25.557704  614159 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 11:44:25.589271  614159 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 11:44:25.618643  614159 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	I0226 11:44:25.620871  614159 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0226 11:44:25.624664  614159 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0226 11:44:25.625052  614159 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 11:44:25.646791  614159 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0226 11:44:25.646907  614159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:44:25.704585  614159 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-26 11:44:25.695012832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 11:44:25.704813  614159 docker.go:295] overlay module found
	I0226 11:44:25.706722  614159 out.go:97] Using the docker driver based on user configuration
	I0226 11:44:25.706750  614159 start.go:299] selected driver: docker
	I0226 11:44:25.706757  614159 start.go:903] validating driver "docker" against <nil>
	I0226 11:44:25.706867  614159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:44:25.761995  614159 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-26 11:44:25.753119949 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 11:44:25.762168  614159 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 11:44:25.762446  614159 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0226 11:44:25.762611  614159 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0226 11:44:25.765021  614159 out.go:169] Using Docker driver with root privileges
	I0226 11:44:25.766807  614159 cni.go:84] Creating CNI manager for ""
	I0226 11:44:25.766828  614159 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 11:44:25.766838  614159 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0226 11:44:25.766849  614159 start_flags.go:323] config:
	{Name:download-only-333231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-333231 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:44:25.768916  614159 out.go:97] Starting control plane node download-only-333231 in cluster download-only-333231
	I0226 11:44:25.768939  614159 cache.go:121] Beginning downloading kic base image for docker with crio
	I0226 11:44:25.770699  614159 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0226 11:44:25.770727  614159 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0226 11:44:25.770884  614159 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 11:44:25.785503  614159 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0226 11:44:25.785622  614159 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0226 11:44:25.785646  614159 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory, skipping pull
	I0226 11:44:25.785651  614159 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in cache, skipping pull
	I0226 11:44:25.785660  614159 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	I0226 11:44:25.835305  614159 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0226 11:44:25.835326  614159 cache.go:56] Caching tarball of preloaded images
	I0226 11:44:25.835480  614159 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0226 11:44:25.837662  614159 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0226 11:44:25.837691  614159 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0226 11:44:25.940437  614159 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-333231"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
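Note: the "exit status 85" above is the expected outcome, not a failure; a --download-only run never creates a control plane node, so "minikube logs" has nothing to read and the test treats that exit code as a pass. A rough manual check (illustrative only, assuming the download-only-333231 profile still exists at this point in the run) would be:

    # logs on a download-only profile is expected to fail with the
    # "control plane node does not exist" hint seen in the stdout above
    out/minikube-linux-arm64 logs -p download-only-333231
    echo $?    # prints 85 in this scenario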

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-333231
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (8.91s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-735131 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-735131 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.912848353s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (8.91s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-735131
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-735131: exit status 85 (293.149621ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-744997 | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC |                     |
	|         | -p download-only-744997           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC | 26 Feb 24 11:44 UTC |
	| delete  | -p download-only-744997           | download-only-744997 | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC | 26 Feb 24 11:44 UTC |
	| start   | -o=json --download-only           | download-only-333231 | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC |                     |
	|         | -p download-only-333231           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC | 26 Feb 24 11:44 UTC |
	| delete  | -p download-only-333231           | download-only-333231 | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC | 26 Feb 24 11:44 UTC |
	| start   | -o=json --download-only           | download-only-735131 | jenkins | v1.32.0 | 26 Feb 24 11:44 UTC |                     |
	|         | -p download-only-735131           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/26 11:44:35
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0226 11:44:35.212165  614322 out.go:291] Setting OutFile to fd 1 ...
	I0226 11:44:35.212393  614322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:44:35.212426  614322 out.go:304] Setting ErrFile to fd 2...
	I0226 11:44:35.212446  614322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:44:35.212774  614322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
	I0226 11:44:35.213251  614322 out.go:298] Setting JSON to true
	I0226 11:44:35.214176  614322 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":88022,"bootTime":1708859854,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0226 11:44:35.214275  614322 start.go:139] virtualization:  
	I0226 11:44:35.217181  614322 out.go:97] [download-only-735131] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0226 11:44:35.219470  614322 out.go:169] MINIKUBE_LOCATION=18222
	I0226 11:44:35.217476  614322 notify.go:220] Checking for updates...
	I0226 11:44:35.223590  614322 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 11:44:35.225713  614322 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 11:44:35.227783  614322 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	I0226 11:44:35.229609  614322 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0226 11:44:35.233035  614322 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0226 11:44:35.233333  614322 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 11:44:35.254427  614322 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0226 11:44:35.254529  614322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:44:35.327053  614322 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-26 11:44:35.31788462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 11:44:35.327166  614322 docker.go:295] overlay module found
	I0226 11:44:35.329421  614322 out.go:97] Using the docker driver based on user configuration
	I0226 11:44:35.329463  614322 start.go:299] selected driver: docker
	I0226 11:44:35.329475  614322 start.go:903] validating driver "docker" against <nil>
	I0226 11:44:35.329583  614322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:44:35.383995  614322 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-26 11:44:35.37459298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 11:44:35.384178  614322 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0226 11:44:35.384469  614322 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0226 11:44:35.384636  614322 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0226 11:44:35.386942  614322 out.go:169] Using Docker driver with root privileges
	I0226 11:44:35.388900  614322 cni.go:84] Creating CNI manager for ""
	I0226 11:44:35.388921  614322 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0226 11:44:35.388931  614322 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0226 11:44:35.388946  614322 start_flags.go:323] config:
	{Name:download-only-735131 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-735131 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:44:35.390909  614322 out.go:97] Starting control plane node download-only-735131 in cluster download-only-735131
	I0226 11:44:35.390928  614322 cache.go:121] Beginning downloading kic base image for docker with crio
	I0226 11:44:35.392868  614322 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0226 11:44:35.392894  614322 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0226 11:44:35.393061  614322 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0226 11:44:35.407816  614322 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0226 11:44:35.407937  614322 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0226 11:44:35.407959  614322 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory, skipping pull
	I0226 11:44:35.407966  614322 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in cache, skipping pull
	I0226 11:44:35.407974  614322 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	I0226 11:44:35.452831  614322 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0226 11:44:35.452864  614322 cache.go:56] Caching tarball of preloaded images
	I0226 11:44:35.453027  614322 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0226 11:44:35.455087  614322 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0226 11:44:35.455129  614322 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0226 11:44:35.552317  614322 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:9d8119c6fd5c58f71de57a6fdbe27bf3 -> /home/jenkins/minikube-integration/18222-608626/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-735131"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.29s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.37s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-735131
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.24s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-467666 --alsologtostderr --binary-mirror http://127.0.0.1:45723 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-467666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-467666
--- PASS: TestBinaryMirror (0.56s)
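For context, --binary-mirror points the Kubernetes binary downloads (kubeadm, kubelet, kubectl) at an alternate HTTP location instead of the default storage bucket; in this run the test appears to serve a throwaway mirror on 127.0.0.1:45723. A condensed, illustrative form of the same invocation (the address is specific to this run, not a fixed value) is:

    # download-only start that fetches the k8s binaries from a local mirror
    out/minikube-linux-arm64 start --download-only -p binary-mirror-467666 \
      --binary-mirror http://127.0.0.1:45723 --driver=docker --container-runtime=crio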

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-006797
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-006797: exit status 85 (77.891675ms)

                                                
                                                
-- stdout --
	* Profile "addons-006797" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-006797"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-006797
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-006797: exit status 85 (79.944822ms)

                                                
                                                
-- stdout --
	* Profile "addons-006797" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-006797"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (166.25s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-006797 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-006797 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m46.245555442s)
--- PASS: TestAddons/Setup (166.25s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 53.749505ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-fcrhw" [b5614868-d4d9-4a9d-b64e-b828d191ec44] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004779849s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qwsks" [f04cec61-ef11-4852-919b-e0bc55dd9118] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004949906s
addons_test.go:340: (dbg) Run:  kubectl --context addons-006797 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-006797 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-006797 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.637258871s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-006797 ip
2024/02/26 11:47:48 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-006797 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.78s)
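The registry addon is exercised from two directions in the run above: a busybox pod probes the in-cluster service DNS name, and the host then hits the node IP on port 5000, which the registry-proxy pods seen earlier appear to forward to the registry. A condensed, illustrative reproduction (profile name taken from this run) is:

    # in-cluster check against the registry service name
    kubectl --context addons-006797 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # host-side check via the node IP reported by "minikube ip"
    curl -sI "http://$(out/minikube-linux-arm64 -p addons-006797 ip):5000"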

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.88s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-d2wv9" [6b62903d-46e7-4589-99bc-efcc9e729510] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004455695s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-006797
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-006797: (5.87925018s)
--- PASS: TestAddons/parallel/InspektorGadget (11.88s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.89s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 8.120919ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-4r7kf" [67e2b4c0-08e4-47f1-8f49-27fe813e4211] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004760296s
addons_test.go:415: (dbg) Run:  kubectl --context addons-006797 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-006797 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.89s)

                                                
                                    
x
+
TestAddons/parallel/CSI (67.05s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 53.918567ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-006797 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-006797 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7cdc7590-7ef4-4880-bb64-79cf98dd7f8a] Pending
helpers_test.go:344: "task-pv-pod" [7cdc7590-7ef4-4880-bb64-79cf98dd7f8a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7cdc7590-7ef4-4880-bb64-79cf98dd7f8a] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004306815s
addons_test.go:584: (dbg) Run:  kubectl --context addons-006797 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-006797 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-006797 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-006797 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-006797 delete pod task-pv-pod: (1.177246281s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-006797 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-006797 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-006797 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7d3a7818-a831-451a-a12f-3613ec42b36a] Pending
helpers_test.go:344: "task-pv-pod-restore" [7d3a7818-a831-451a-a12f-3613ec42b36a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7d3a7818-a831-451a-a12f-3613ec42b36a] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004323684s
addons_test.go:626: (dbg) Run:  kubectl --context addons-006797 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-006797 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-006797 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-006797 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-006797 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.81557873s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-006797 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (67.05s)
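The long run of identical "get pvc hpvc" lines above is the test helper polling the claim's phase until it reaches the state the test waits for. An equivalent ad-hoc loop (the interval and exit condition here are illustrative, not the helper's exact values) is:

    # print the PVC phase every couple of seconds until it reaches the expected state
    while true; do
      kubectl --context addons-006797 get pvc hpvc -o 'jsonpath={.status.phase}{"\n"}'
      sleep 2
    done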

                                                
                                    
x
+
TestAddons/parallel/Headlamp (11.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-006797 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-006797 --alsologtostderr -v=1: (1.489622458s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-rf6wk" [cf0abaf0-e419-49f4-a7c4-c315ad41066e] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-rf6wk" [cf0abaf0-e419-49f4-a7c4-c315ad41066e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-rf6wk" [cf0abaf0-e419-49f4-a7c4-c315ad41066e] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003785707s
--- PASS: TestAddons/parallel/Headlamp (11.49s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.6s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-rx92g" [e28a7e81-061a-43bf-b75c-9758aef9f297] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003397046s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-006797
--- PASS: TestAddons/parallel/CloudSpanner (6.60s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.32s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-006797 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-006797 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-006797 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [48a5775f-48a2-4b9c-b058-dc5c010f5842] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [48a5775f-48a2-4b9c-b058-dc5c010f5842] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [48a5775f-48a2-4b9c-b058-dc5c010f5842] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003531666s
addons_test.go:891: (dbg) Run:  kubectl --context addons-006797 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-006797 ssh "cat /opt/local-path-provisioner/pvc-fdf276d8-1831-47fb-9d00-f09a775c769f_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-006797 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-006797 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-006797 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-006797 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.203239226s)
--- PASS: TestAddons/parallel/LocalPath (53.32s)
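The local-path check writes through a PVC and then reads the file straight off the node, because the storage-provisioner-rancher addon keeps volume data under /opt/local-path-provisioner on the host (the exact directory name above embeds the generated PVC UID). A generic way to inspect what the provisioner created, illustrative only, is:

    # list the per-PVC directories the local-path provisioner has materialized on the node
    out/minikube-linux-arm64 -p addons-006797 ssh "ls /opt/local-path-provisioner/"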

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-z98fk" [30c9a229-be92-41ee-a430-6170767b3979] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004734147s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-006797
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-hs94l" [393b211f-51f5-4097-b97f-d8c18b3eea42] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006911726s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-006797 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-006797 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-006797
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-006797: (11.969405161s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-006797
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-006797
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-006797
--- PASS: TestAddons/StoppedEnableDisable (12.27s)
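What is being verified here is simply that addon toggling still works against a stopped cluster; as a rough sketch (`minikube` again standing in for the binary under test):
	minikube stop -p addons-006797                      # stop the cluster (about 12s in this run)
	minikube addons enable dashboard -p addons-006797   # enable/disable must succeed while stopped
	minikube addons disable dashboard -p addons-006797
	minikube addons disable gvisor -p addons-006797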

                                                
                                    
TestCertOptions (37.22s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-437335 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-437335 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.572830701s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-437335 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-437335 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-437335 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-437335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-437335
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-437335: (1.974395567s)
--- PASS: TestCertOptions (37.22s)
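A minimal sketch of what this test checks: start a cluster with extra API-server SANs and a non-default port, then confirm they show up in the generated certificate and kubeconfig (profile name and flags are taken from the log above):
	minikube start -p cert-options-437335 --memory=2048 \
	  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
	  --apiserver-names=localhost --apiserver-names=www.google.com \
	  --apiserver-port=8555 --driver=docker --container-runtime=crio
	# the extra IPs and names should appear under Subject Alternative Name
	minikube -p cert-options-437335 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
	# the kubeconfig and admin.conf should point at port 8555
	kubectl --context cert-options-437335 config view
	minikube ssh -p cert-options-437335 -- "sudo cat /etc/kubernetes/admin.conf"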

                                                
                                    
TestCertExpiration (245.7s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-864845 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-864845 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.211420464s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-864845 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-864845 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (22.816228411s)
helpers_test.go:175: Cleaning up "cert-expiration-864845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-864845
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-864845: (2.675671392s)
--- PASS: TestCertExpiration (245.70s)
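The point of this test is that certificates issued with a short --cert-expiration are reissued by a later start with a longer one; roughly (a sketch, with the waiting period implied by the test's ~4-minute runtime):
	# the first start issues certificates valid for only 3 minutes
	minikube start -p cert-expiration-864845 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
	# after the certificates have expired, a second start with a one-year expiration must still succeed
	minikube start -p cert-expiration-864845 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio
	minikube delete -p cert-expiration-864845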

                                                
                                    
TestForceSystemdFlag (39.81s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-700637 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0226 12:25:36.215039  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-700637 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.755269063s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-700637 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-700637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-700637
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-700637: (2.641653751s)
--- PASS: TestForceSystemdFlag (39.81s)
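The assertion here boils down to: with --force-systemd, the CRI-O drop-in written by minikube should select the systemd cgroup manager. A sketch of the same check (the exact expected file contents are not shown in the log):
	minikube start -p force-systemd-flag-700637 --memory=2048 --force-systemd --driver=docker --container-runtime=crio
	# the test reads this drop-in and looks for the systemd cgroup manager setting
	minikube -p force-systemd-flag-700637 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"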

                                                
                                    
TestForceSystemdEnv (41.91s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-763409 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-763409 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.2182751s)
helpers_test.go:175: Cleaning up "force-systemd-env-763409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-763409
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-763409: (2.693924577s)
--- PASS: TestForceSystemdEnv (41.91s)

                                                
                                    
TestErrorSpam/setup (31.35s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-457523 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-457523 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-457523 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-457523 --driver=docker  --container-runtime=crio: (31.346961664s)
--- PASS: TestErrorSpam/setup (31.35s)

                                                
                                    
TestErrorSpam/start (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

                                                
                                    
TestErrorSpam/status (1.06s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 status
--- PASS: TestErrorSpam/status (1.06s)

                                                
                                    
TestErrorSpam/pause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 pause
--- PASS: TestErrorSpam/pause (1.72s)

                                                
                                    
TestErrorSpam/unpause (1.84s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

                                                
                                    
TestErrorSpam/stop (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 stop: (1.272805426s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457523 --log_dir /tmp/nospam-457523 stop
--- PASS: TestErrorSpam/stop (1.47s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18222-608626/.minikube/files/etc/test/nested/copy/613988/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                    
TestFunctional/serial/StartWithProxy (48.73s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-395953 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0226 11:52:33.168615  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 11:52:33.175503  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 11:52:33.185774  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 11:52:33.206042  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 11:52:33.246346  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 11:52:33.326665  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 11:52:33.487039  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 11:52:33.807539  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 11:52:34.447795  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 11:52:35.728251  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 11:52:38.289930  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 11:52:43.410509  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 11:52:53.651148  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-395953 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (48.725216524s)
--- PASS: TestFunctional/serial/StartWithProxy (48.73s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (40.32s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-395953 --alsologtostderr -v=8
E0226 11:53:14.131426  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-395953 --alsologtostderr -v=8: (40.311724566s)
functional_test.go:659: soft start took 40.323247169s for "functional-395953" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.32s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-395953 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-395953 cache add registry.k8s.io/pause:3.1: (1.232828624s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-395953 cache add registry.k8s.io/pause:3.3: (1.232881368s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-395953 cache add registry.k8s.io/pause:latest: (1.067608694s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-395953 /tmp/TestFunctionalserialCacheCmdcacheadd_local1527379038/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 cache add minikube-local-cache-test:functional-395953
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 cache delete minikube-local-cache-test:functional-395953
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-395953
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-395953 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (331.930451ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)
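The cache commands above follow a remove / verify-missing / reload / verify-present cycle; roughly (`minikube` standing in for the binary under test):
	# drop the image from the node's container runtime
	minikube -p functional-395953 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-395953 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image is gone
	# push everything in minikube's local cache back onto the node
	minikube -p functional-395953 cache reload
	minikube -p functional-395953 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again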

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 kubectl -- --context functional-395953 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-395953 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.35s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-395953 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0226 11:53:55.092612  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-395953 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.345560578s)
functional_test.go:757: restart took 34.345671115s for "functional-395953" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.35s)
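The restart above passes a component override through --extra-config; the general pattern, sketched from the logged command, is:
	# restart the existing profile, adding an admission plugin to the apiserver
	minikube start -p functional-395953 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	# the control-plane pods should then all report Ready (checked by ComponentHealth below)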

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-395953 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.7s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-395953 logs: (1.698570294s)
--- PASS: TestFunctional/serial/LogsCmd (1.70s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 logs --file /tmp/TestFunctionalserialLogsFileCmd4013956396/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-395953 logs --file /tmp/TestFunctionalserialLogsFileCmd4013956396/001/logs.txt: (1.72638693s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.73s)

                                                
                                    
TestFunctional/serial/InvalidService (4.31s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-395953 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-395953
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-395953: exit status 115 (568.082202ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30777 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-395953 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.31s)
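Exit status 115 (SVC_UNREACHABLE) is the expected outcome here, because the service never gets a running backing pod (per the error text above); the scenario is roughly:
	# invalidsvc.yaml creates a service whose pod never reaches Running
	kubectl --context functional-395953 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-395953   # exits 115: no running pod for the service
	kubectl --context functional-395953 delete -f testdata/invalidsvc.yaml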

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-395953 config get cpus: exit status 14 (117.444239ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-395953 config get cpus: exit status 14 (101.328425ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.60s)
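The two exit-14 results are the expected behaviour of `config get` on an unset key; the cycle being exercised is:
	minikube -p functional-395953 config unset cpus
	minikube -p functional-395953 config get cpus     # exit 14: key not found
	minikube -p functional-395953 config set cpus 2
	minikube -p functional-395953 config get cpus     # prints 2
	minikube -p functional-395953 config unset cpus
	minikube -p functional-395953 config get cpus     # exit 14 again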

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-395953 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-395953 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 640358: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.28s)

                                                
                                    
TestFunctional/parallel/DryRun (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-395953 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-395953 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (219.798278ms)

                                                
                                                
-- stdout --
	* [functional-395953] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0226 11:55:15.982443  639609 out.go:291] Setting OutFile to fd 1 ...
	I0226 11:55:15.982601  639609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:55:15.982627  639609 out.go:304] Setting ErrFile to fd 2...
	I0226 11:55:15.982645  639609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:55:15.982972  639609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
	I0226 11:55:15.983410  639609 out.go:298] Setting JSON to false
	I0226 11:55:15.984459  639609 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":88662,"bootTime":1708859854,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0226 11:55:15.984545  639609 start.go:139] virtualization:  
	I0226 11:55:15.987277  639609 out.go:177] * [functional-395953] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0226 11:55:15.989490  639609 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 11:55:15.991314  639609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 11:55:15.989616  639609 notify.go:220] Checking for updates...
	I0226 11:55:15.995357  639609 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 11:55:15.997256  639609 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	I0226 11:55:15.999378  639609 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0226 11:55:16.005538  639609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 11:55:16.008213  639609 config.go:182] Loaded profile config "functional-395953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 11:55:16.008931  639609 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 11:55:16.034382  639609 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0226 11:55:16.034512  639609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:55:16.118384  639609 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-26 11:55:16.108857917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 11:55:16.118497  639609 docker.go:295] overlay module found
	I0226 11:55:16.120829  639609 out.go:177] * Using the docker driver based on existing profile
	I0226 11:55:16.122558  639609 start.go:299] selected driver: docker
	I0226 11:55:16.122575  639609 start.go:903] validating driver "docker" against &{Name:functional-395953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-395953 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:55:16.122700  639609 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 11:55:16.125404  639609 out.go:177] 
	W0226 11:55:16.127178  639609 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0226 11:55:16.129464  639609 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-395953 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
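The failing dry-run is deliberate: 250MB is below minikube's usable minimum, so validation exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before anything is created, while a dry-run without the undersized memory request validates cleanly. Sketch:
	minikube start -p functional-395953 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # rejected, exit 23
	minikube start -p functional-395953 --dry-run --driver=docker --container-runtime=crio                  # validates against the existing profile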

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-395953 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-395953 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (199.262501ms)

                                                
                                                
-- stdout --
	* [functional-395953] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0226 11:55:19.381544  640205 out.go:291] Setting OutFile to fd 1 ...
	I0226 11:55:19.381738  640205 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:55:19.381751  640205 out.go:304] Setting ErrFile to fd 2...
	I0226 11:55:19.381757  640205 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 11:55:19.382186  640205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
	I0226 11:55:19.382610  640205 out.go:298] Setting JSON to false
	I0226 11:55:19.384447  640205 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":88666,"bootTime":1708859854,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0226 11:55:19.384523  640205 start.go:139] virtualization:  
	I0226 11:55:19.386966  640205 out.go:177] * [functional-395953] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0226 11:55:19.388851  640205 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 11:55:19.390602  640205 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 11:55:19.388983  640205 notify.go:220] Checking for updates...
	I0226 11:55:19.393225  640205 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 11:55:19.395144  640205 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	I0226 11:55:19.397024  640205 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0226 11:55:19.398964  640205 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 11:55:19.401159  640205 config.go:182] Loaded profile config "functional-395953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 11:55:19.401733  640205 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 11:55:19.422922  640205 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0226 11:55:19.423048  640205 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 11:55:19.498185  640205 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-26 11:55:19.488999778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 11:55:19.498307  640205 docker.go:295] overlay module found
	I0226 11:55:19.500613  640205 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0226 11:55:19.502215  640205 start.go:299] selected driver: docker
	I0226 11:55:19.502235  640205 start.go:903] validating driver "docker" against &{Name:functional-395953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-395953 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0226 11:55:19.502353  640205 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 11:55:19.504852  640205 out.go:177] 
	W0226 11:55:19.506687  640205 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0226 11:55:19.508545  640205 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)
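Beyond the plain status call, this test exercises a Go-template format string and JSON output; roughly (the template fields are the ones used in the logged command):
	minikube -p functional-395953 status
	minikube -p functional-395953 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	minikube -p functional-395953 status -o json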

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-395953 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-395953 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-tm9g4" [a692f644-6b08-45a2-a67f-bb48d01c612d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-tm9g4" [a692f644-6b08-45a2-a67f-bb48d01c612d] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003854654s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31883
functional_test.go:1671: http://192.168.49.2:31883: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-tm9g4

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31883
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.62s)
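The ServiceCmdConnect flow above (create a deployment, expose it as a NodePort service, resolve the URL, then fetch it) can be repeated by hand against the same profile. A minimal sketch, assuming the same binary path and image as this run:

kubectl --context functional-395953 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-395953 expose deployment hello-node-connect --type=NodePort --port=8080
kubectl --context functional-395953 wait --for=condition=available deployment/hello-node-connect --timeout=2m
URL=$(out/minikube-linux-arm64 -p functional-395953 service hello-node-connect --url)
curl -s "$URL"   # the echoserver reply should contain the Hostname and Request Information sections shown above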

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2d52e1f5-d232-4257-82c9-3dec917ad67b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004254633s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-395953 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-395953 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-395953 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-395953 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-395953 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [468cb63a-3457-49f5-9ddb-41f6d022c7f7] Pending
helpers_test.go:344: "sp-pod" [468cb63a-3457-49f5-9ddb-41f6d022c7f7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [468cb63a-3457-49f5-9ddb-41f6d022c7f7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.0046384s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-395953 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-395953 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-395953 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ff6bad8e-e26e-490a-8215-58e3596fa867] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ff6bad8e-e26e-490a-8215-58e3596fa867] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.006032422s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-395953 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.90s)
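The claim and pod come from testdata/storage-provisioner/pvc.yaml and pod.yaml, whose contents are not included in this report. A hand-rolled equivalent of the same persistence check, using an illustrative claim manifest in place of the testdata file:

cat <<'EOF' | kubectl --context functional-395953 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
kubectl --context functional-395953 get pvc myclaim -o jsonpath='{.status.phase}'   # expect Bound once the default storage class provisions it
# then, as in the log: run a pod that mounts the claim at /tmp/mount, touch /tmp/mount/foo,
# delete and recreate the pod, and confirm with ls /tmp/mount that the file survived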

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh -n functional-395953 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 cp functional-395953:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3972624565/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh -n functional-395953 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh -n functional-395953 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.21s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/613988/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "sudo cat /etc/test/nested/copy/613988/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/613988.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "sudo cat /etc/ssl/certs/613988.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/613988.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "sudo cat /usr/share/ca-certificates/613988.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/6139882.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "sudo cat /etc/ssl/certs/6139882.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/6139882.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "sudo cat /usr/share/ca-certificates/6139882.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.09s)
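CertSync checks that a certificate supplied on the host side is synced into the node under both /etc/ssl/certs and /usr/share/ca-certificates, and that the hashed names (51391683.0, 3ec20f2e.0) used by OpenSSL-style lookup are present as well; the 613988/6139882 components of the paths are specific to this test run. A quick manual spot check against the same profile (a sketch, not the test's own assertions):

for f in /etc/ssl/certs/613988.pem /usr/share/ca-certificates/613988.pem /etc/ssl/certs/51391683.0; do
  out/minikube-linux-arm64 -p functional-395953 ssh "sudo test -s $f && echo $f: present"
done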

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-395953 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-395953 ssh "sudo systemctl is-active docker": exit status 1 (388.412022ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-395953 ssh "sudo systemctl is-active containerd": exit status 1 (394.442811ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)
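The non-zero exits above are the expected result: systemctl is-active prints "inactive" and returns a non-zero status (3 in this run) when a unit is not running, and minikube ssh surfaces that as its own exit code, hence "Process exited with status 3". With crio as the configured runtime, a quick cross-check looks like this (sketch):

out/minikube-linux-arm64 -p functional-395953 ssh "sudo systemctl is-active crio"     # expect: active, exit 0
out/minikube-linux-arm64 -p functional-395953 ssh "sudo systemctl is-active docker"   # expect: inactive, exit 3 as above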

                                                
                                    
x
+
TestFunctional/parallel/License (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 version -o=json --components
2024/02/26 11:55:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-395953 version -o=json --components: (1.182914118s)
--- PASS: TestFunctional/parallel/Version/components (1.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-395953 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-395953
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240202-8f1494ea
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-395953 image ls --format short --alsologtostderr:
I0226 11:55:33.431103  641594 out.go:291] Setting OutFile to fd 1 ...
I0226 11:55:33.431400  641594 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 11:55:33.431431  641594 out.go:304] Setting ErrFile to fd 2...
I0226 11:55:33.431452  641594 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 11:55:33.431865  641594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
I0226 11:55:33.432759  641594 config.go:182] Loaded profile config "functional-395953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0226 11:55:33.432941  641594 config.go:182] Loaded profile config "functional-395953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0226 11:55:33.433571  641594 cli_runner.go:164] Run: docker container inspect functional-395953 --format={{.State.Status}}
I0226 11:55:33.471029  641594 ssh_runner.go:195] Run: systemctl --version
I0226 11:55:33.471097  641594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-395953
I0226 11:55:33.498127  641594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36811 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/functional-395953/id_rsa Username:docker}
I0226 11:55:33.598233  641594 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-395953 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/google-containers/addon-resizer  | functional-395953  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| docker.io/library/nginx                 | latest             | 760b7cbba31e1 | 196MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/library/nginx                 | alpine             | be5e6f23a9904 | 45.4MB |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4740c1948d3fc | 60.9MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-395953 image ls --format table --alsologtostderr:
I0226 11:55:33.759406  641649 out.go:291] Setting OutFile to fd 1 ...
I0226 11:55:33.759607  641649 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 11:55:33.759621  641649 out.go:304] Setting ErrFile to fd 2...
I0226 11:55:33.759628  641649 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 11:55:33.759918  641649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
I0226 11:55:33.760606  641649 config.go:182] Loaded profile config "functional-395953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0226 11:55:33.760817  641649 config.go:182] Loaded profile config "functional-395953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0226 11:55:33.761345  641649 cli_runner.go:164] Run: docker container inspect functional-395953 --format={{.State.Status}}
I0226 11:55:33.782059  641649 ssh_runner.go:195] Run: systemctl --version
I0226 11:55:33.782125  641649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-395953
I0226 11:55:33.799274  641649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36811 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/functional-395953/id_rsa Username:docker}
I0226 11:55:33.897333  641649 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-395953 image ls --format json --alsologtostderr:
[{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f2451884467
6","repoDigests":["docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107","docker.io/library/nginx@sha256:d5ec359034df4b326b8b5f0efa26dbd8742d166161b7edb37321b795c8fe5f48"],"repoTags":["docker.io/library/nginx:latest"],"size":"196117996"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"
size":"29037500"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"},{"id":"be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":["docker.io/library/nginx@sha256:34aa0a372d3220dc0448131f809c72d8085f79bdec8058ad6970fc034a395674","docker.io/library/nginx@sha256:6a2f8b28e45c4ad
ea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45393258"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-395953"],"size":"34114467"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},
{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"59253556"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fb
fd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"60940831"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/c
oredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-395953 image ls --format json --alsologtostderr:
I0226 11:55:33.468158  641598 out.go:291] Setting OutFile to fd 1 ...
I0226 11:55:33.468315  641598 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 11:55:33.468326  641598 out.go:304] Setting ErrFile to fd 2...
I0226 11:55:33.468332  641598 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 11:55:33.468709  641598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
I0226 11:55:33.469365  641598 config.go:182] Loaded profile config "functional-395953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0226 11:55:33.469495  641598 config.go:182] Loaded profile config "functional-395953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0226 11:55:33.469975  641598 cli_runner.go:164] Run: docker container inspect functional-395953 --format={{.State.Status}}
I0226 11:55:33.488149  641598 ssh_runner.go:195] Run: systemctl --version
I0226 11:55:33.488209  641598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-395953
I0226 11:55:33.511695  641598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36811 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/functional-395953/id_rsa Username:docker}
I0226 11:55:33.613801  641598 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-395953 image ls --format yaml --alsologtostderr:
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-395953
size: "34114467"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676
repoDigests:
- docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107
- docker.io/library/nginx@sha256:d5ec359034df4b326b8b5f0efa26dbd8742d166161b7edb37321b795c8fe5f48
repoTags:
- docker.io/library/nginx:latest
size: "196117996"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests:
- docker.io/library/nginx@sha256:34aa0a372d3220dc0448131f809c72d8085f79bdec8058ad6970fc034a395674
- docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9
repoTags:
- docker.io/library/nginx:alpine
size: "45393258"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "60940831"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-395953 image ls --format yaml --alsologtostderr:
I0226 11:55:33.777071  641653 out.go:291] Setting OutFile to fd 1 ...
I0226 11:55:33.777306  641653 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 11:55:33.777340  641653 out.go:304] Setting ErrFile to fd 2...
I0226 11:55:33.777367  641653 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 11:55:33.777654  641653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
I0226 11:55:33.779032  641653 config.go:182] Loaded profile config "functional-395953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0226 11:55:33.779215  641653 config.go:182] Loaded profile config "functional-395953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0226 11:55:33.779862  641653 cli_runner.go:164] Run: docker container inspect functional-395953 --format={{.State.Status}}
I0226 11:55:33.800497  641653 ssh_runner.go:195] Run: systemctl --version
I0226 11:55:33.800545  641653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-395953
I0226 11:55:33.830102  641653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36811 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/functional-395953/id_rsa Username:docker}
I0226 11:55:33.930320  641653 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-395953 ssh pgrep buildkitd: exit status 1 (308.516572ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image build -t localhost/my-image:functional-395953 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-395953 image build -t localhost/my-image:functional-395953 testdata/build --alsologtostderr: (2.041279615s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-395953 image build -t localhost/my-image:functional-395953 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> cfb09810ccd
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-395953
--> cc5321b31e7
Successfully tagged localhost/my-image:functional-395953
cc5321b31e726ef618b8a32ce6c94b08b1d7c8f6dba5347903ba4d89cd1ee89b
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-395953 image build -t localhost/my-image:functional-395953 testdata/build --alsologtostderr:
I0226 11:55:34.336879  641754 out.go:291] Setting OutFile to fd 1 ...
I0226 11:55:34.337847  641754 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 11:55:34.337860  641754 out.go:304] Setting ErrFile to fd 2...
I0226 11:55:34.337866  641754 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 11:55:34.338138  641754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
I0226 11:55:34.338790  641754 config.go:182] Loaded profile config "functional-395953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0226 11:55:34.340792  641754 config.go:182] Loaded profile config "functional-395953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0226 11:55:34.341460  641754 cli_runner.go:164] Run: docker container inspect functional-395953 --format={{.State.Status}}
I0226 11:55:34.357543  641754 ssh_runner.go:195] Run: systemctl --version
I0226 11:55:34.357593  641754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-395953
I0226 11:55:34.376291  641754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36811 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/functional-395953/id_rsa Username:docker}
I0226 11:55:34.473591  641754 build_images.go:151] Building image from path: /tmp/build.1201943915.tar
I0226 11:55:34.473669  641754 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0226 11:55:34.482446  641754 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1201943915.tar
I0226 11:55:34.486158  641754 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1201943915.tar: stat -c "%s %y" /var/lib/minikube/build/build.1201943915.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1201943915.tar': No such file or directory
I0226 11:55:34.486191  641754 ssh_runner.go:362] scp /tmp/build.1201943915.tar --> /var/lib/minikube/build/build.1201943915.tar (3072 bytes)
I0226 11:55:34.512962  641754 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1201943915
I0226 11:55:34.522322  641754 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1201943915 -xf /var/lib/minikube/build/build.1201943915.tar
I0226 11:55:34.532113  641754 crio.go:297] Building image: /var/lib/minikube/build/build.1201943915
I0226 11:55:34.532229  641754 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-395953 /var/lib/minikube/build/build.1201943915 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0226 11:55:36.283886  641754 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-395953 /var/lib/minikube/build/build.1201943915 --cgroup-manager=cgroupfs: (1.751626556s)
I0226 11:55:36.283984  641754 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1201943915
I0226 11:55:36.293350  641754 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1201943915.tar
I0226 11:55:36.302368  641754 build_images.go:207] Built localhost/my-image:functional-395953 from /tmp/build.1201943915.tar
I0226 11:55:36.302409  641754 build_images.go:123] succeeded building to: functional-395953
I0226 11:55:36.302414  641754 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.59s)
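The build steps recorded above imply a three-line build context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) taken from testdata/build and executed with podman inside the node. An equivalent stand-alone reproduction, using a throwaway directory since the testdata contents themselves are not shown in this report:

mkdir -p /tmp/imagebuild && printf 'hello\n' > /tmp/imagebuild/content.txt
cat > /tmp/imagebuild/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-arm64 -p functional-395953 image build -t localhost/my-image:functional-395953 /tmp/imagebuild --alsologtostderr
out/minikube-linux-arm64 -p functional-395953 image ls | grep my-image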

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.503593248s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-395953
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.56s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image load --daemon gcr.io/google-containers/addon-resizer:functional-395953 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-395953 image load --daemon gcr.io/google-containers/addon-resizer:functional-395953 --alsologtostderr: (4.668522244s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.92s)
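The daemon-load path exercised here pulls an image into the host's Docker daemon, retags it with the profile name, and then loads it from the daemon into the cluster's image store. By hand, using the same tags as the Setup step above (sketch):

docker pull gcr.io/google-containers/addon-resizer:1.8.8
docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-395953
out/minikube-linux-arm64 -p functional-395953 image load --daemon gcr.io/google-containers/addon-resizer:functional-395953
out/minikube-linux-arm64 -p functional-395953 image ls | grep addon-resizer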

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "425.860376ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "72.071854ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "411.948784ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "90.457077ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-395953 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-395953 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-395953 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 637830: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-395953 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-395953 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-395953 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4904677a-ac20-467d-a579-c12cd9d8e8e8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4904677a-ac20-467d-a579-c12cd9d8e8e8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004553722s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image load --daemon gcr.io/google-containers/addon-resizer:functional-395953 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-395953 image load --daemon gcr.io/google-containers/addon-resizer:functional-395953 --alsologtostderr: (2.989711929s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.55940556s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-395953
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image load --daemon gcr.io/google-containers/addon-resizer:functional-395953 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-395953 image load --daemon gcr.io/google-containers/addon-resizer:functional-395953 --alsologtostderr: (3.68929772s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image save gcr.io/google-containers/addon-resizer:functional-395953 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-395953 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.74.45 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-395953 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
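The tunnel subtests above follow a fixed sequence: start the tunnel, wait for the nginx-svc LoadBalancer to be assigned an ingress IP, then hit that IP directly over HTTP. A minimal standalone sketch of the same poll-then-probe loop is shown below; the kubectl context name, timeout, and retry interval are illustrative assumptions, not values taken from the test suite.

```go
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

// waitForIngressIP polls the Service status until a LoadBalancer ingress IP
// appears, mirroring what WaitService/IngressIP checks in the log above.
func waitForIngressIP(context, svc string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "get", "svc", svc,
			"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if err == nil {
			if ip := strings.TrimSpace(string(out)); ip != "" {
				return ip, nil
			}
		}
		time.Sleep(2 * time.Second) // assumed polling interval
	}
	return "", fmt.Errorf("no ingress IP for %s within %v", svc, timeout)
}

func main() {
	// Assumed context/profile name; this run used functional-395953.
	ip, err := waitForIngressIP("functional-395953", "nginx-svc", 2*time.Minute)
	if err != nil {
		fmt.Println("wait failed:", err)
		return
	}
	// AccessDirect in the log simply confirms the tunneled service answers over HTTP.
	resp, err := http.Get("http://" + ip)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("tunnel answered with status", resp.StatusCode)
}
```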

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image rm gcr.io/google-containers/addon-resizer:functional-395953 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-395953 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.028159532s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-395953
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 image save --daemon gcr.io/google-containers/addon-resizer:functional-395953 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-395953
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)
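ImageSaveToFile, ImageLoadFromFile, and ImageSaveDaemon together exercise a save/load round trip through the minikube image subcommands. Below is a hedged sketch of driving the same round trip from a small Go program; the binary path, profile name, tag, and tarball location are assumptions for illustration only.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run is a small helper around exec.Command that returns combined output.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Assumed paths/names; this report used out/minikube-linux-arm64 and the
	// profile functional-395953.
	minikube := "out/minikube-linux-arm64"
	profile := "functional-395953"
	image := "gcr.io/google-containers/addon-resizer:" + profile
	tar := "/tmp/addon-resizer-save.tar"

	// Save the image from the cluster runtime to a tarball ...
	if out, err := run(minikube, "-p", profile, "image", "save", image, tar); err != nil {
		fmt.Println("save failed:", err, out)
		return
	}
	// ... load it back ...
	if out, err := run(minikube, "-p", profile, "image", "load", tar); err != nil {
		fmt.Println("load failed:", err, out)
		return
	}
	// ... and confirm it is listed again, as the test does with `image ls`.
	out, err := run(minikube, "-p", profile, "image", "ls")
	if err != nil {
		fmt.Println("ls failed:", err)
		return
	}
	fmt.Println("image present after round trip:", strings.Contains(out, "addon-resizer"))
}
```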

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-395953 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-395953 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-l5xkt" [5671f77d-385d-4cac-bbe7-645a5427bdab] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-l5xkt" [5671f77d-385d-4cac-bbe7-645a5427bdab] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004770625s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 service list -o json
functional_test.go:1490: Took "518.907365ms" to run "out/minikube-linux-arm64 -p functional-395953 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:32180
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:32180
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.17s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-395953 /tmp/TestFunctionalparallelMountCmdany-port404927962/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1708948516391140384" to /tmp/TestFunctionalparallelMountCmdany-port404927962/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1708948516391140384" to /tmp/TestFunctionalparallelMountCmdany-port404927962/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1708948516391140384" to /tmp/TestFunctionalparallelMountCmdany-port404927962/001/test-1708948516391140384
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-395953 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (333.977559ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
E0226 11:55:17.013380  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 26 11:55 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 26 11:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 26 11:55 test-1708948516391140384
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh cat /mount-9p/test-1708948516391140384
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-395953 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e0b2845b-1359-4338-9056-d68189b0562e] Pending
helpers_test.go:344: "busybox-mount" [e0b2845b-1359-4338-9056-d68189b0562e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e0b2845b-1359-4338-9056-d68189b0562e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e0b2845b-1359-4338-9056-d68189b0562e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.0047264s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-395953 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-395953 /tmp/TestFunctionalparallelMountCmdany-port404927962/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.17s)
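The first findmnt probe in MountCmd/any-port exits non-zero because the 9p mount is not yet visible in the guest, and the helper simply retries until it is. A minimal retry loop of the same shape is sketched below; the SSH wrapper command, retry budget, and sleep interval are assumptions, not the test suite's actual helper.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// mountVisible runs `findmnt` inside the guest over `minikube ssh` and reports
// whether the 9p mount is present yet, mirroring the probe in the log above.
func mountVisible(minikube, profile, mountPoint string) bool {
	cmd := exec.Command(minikube, "-p", profile, "ssh",
		fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
	return cmd.Run() == nil
}

func main() {
	// Assumed binary and profile; this report used out/minikube-linux-arm64
	// with profile functional-395953 and mount point /mount-9p.
	const (
		minikube   = "out/minikube-linux-arm64"
		profile    = "functional-395953"
		mountPoint = "/mount-9p"
	)
	for i := 0; i < 10; i++ { // small, arbitrary retry budget
		if mountVisible(minikube, profile, mountPoint) {
			fmt.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never became visible")
}
```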

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.61s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-395953 /tmp/TestFunctionalparallelMountCmdspecific-port1808548877/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-395953 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (672.553377ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-395953 /tmp/TestFunctionalparallelMountCmdspecific-port1808548877/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-395953 ssh "sudo umount -f /mount-9p": exit status 1 (431.260529ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-395953 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-395953 /tmp/TestFunctionalparallelMountCmdspecific-port1808548877/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.61s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.95s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-395953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1179451438/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-395953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1179451438/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-395953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1179451438/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-395953 ssh "findmnt -T" /mount1: exit status 1 (1.424874097s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-395953 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-395953 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-395953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1179451438/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-395953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1179451438/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-395953 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1179451438/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.95s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-395953
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-395953
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-395953
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (85.64s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-329029 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-329029 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m25.636900218s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (85.64s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.01s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-329029 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-329029 addons enable ingress --alsologtostderr -v=5: (12.006152881s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.01s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.68s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-329029 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.68s)

                                                
                                    
TestJSONOutput/start/Command (48.08s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-068811 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0226 12:00:22.577873  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 12:01:03.538095  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-068811 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (48.074827483s)
--- PASS: TestJSONOutput/start/Command (48.08s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-068811 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-068811 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-068811 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-068811 --output=json --user=testUser: (5.868427571s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-176020 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-176020 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.169836ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b4cdd03f-a23b-4669-8293-d922beee92e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-176020] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6af44fc8-8922-4a41-847c-1cb9a340924b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18222"}}
	{"specversion":"1.0","id":"253959c0-919a-40b1-8c18-b27cd86cd38a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"98b3e707-f306-449c-aa6f-3fd30aa07e68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig"}}
	{"specversion":"1.0","id":"4f577a9d-4e13-418c-9cf6-2140f1b29bd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube"}}
	{"specversion":"1.0","id":"04dca16e-f117-43c5-96a1-ec55f7fcb0cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"15a98ea8-7b4e-4de2-b939-4704d25a704f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c51b043d-0627-43e8-8b81-2ca5eef793a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-176020" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-176020
--- PASS: TestErrorJSONOutput (0.23s)
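The --output=json lines above are CloudEvents-style envelopes with a free-form data payload; this test effectively checks that the final event is a DRV_UNSUPPORTED_OS error with exit code 56. Below is a hedged sketch of decoding one such line; the struct only models the keys visible in this log, not the full schema, and the event string is a shortened copy of the error event shown above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// event models just the fields of the minikube JSON output that appear in the
// log above; it is not the complete event schema.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// A shortened copy of the final error event from the log.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",` +
		`"data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64",` +
		`"name":"DRV_UNSUPPORTED_OS"}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	// The assertion the test effectively makes: an error event with exit code 56.
	fmt.Println(ev.Type == "io.k8s.sigs.minikube.error" && ev.Data["exitcode"] == "56")
}
```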

                                                
                                    
TestKicCustomNetwork/create_custom_network (43.25s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-258273 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-258273 --network=: (41.199703725s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-258273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-258273
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-258273: (2.030081132s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.25s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (32.93s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-000840 --network=bridge
E0226 12:02:17.836569  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:02:17.841800  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:02:17.852192  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:02:17.873447  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:02:17.914045  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:02:17.994736  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:02:18.155564  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:02:18.476513  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:02:19.117767  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:02:20.398042  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:02:22.959011  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:02:25.458301  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 12:02:28.079808  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:02:33.167803  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-000840 --network=bridge: (30.997644187s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-000840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-000840
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-000840: (1.917628695s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.93s)

                                                
                                    
TestKicExistingNetwork (38.95s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-579555 --network=existing-network
E0226 12:02:38.320781  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:02:58.801836  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-579555 --network=existing-network: (36.802078239s)
helpers_test.go:175: Cleaning up "existing-network-579555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-579555
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-579555: (1.987146517s)
--- PASS: TestKicExistingNetwork (38.95s)

                                                
                                    
TestKicCustomSubnet (32.92s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-460065 --subnet=192.168.60.0/24
E0226 12:03:39.762271  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-460065 --subnet=192.168.60.0/24: (30.851605359s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-460065 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-460065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-460065
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-460065: (2.048659848s)
--- PASS: TestKicCustomSubnet (32.92s)

                                                
                                    
TestKicStaticIP (36.32s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-963224 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-963224 --static-ip=192.168.200.200: (34.081270587s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-963224 ip
helpers_test.go:175: Cleaning up "static-ip-963224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-963224
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-963224: (2.087310253s)
--- PASS: TestKicStaticIP (36.32s)

                                                
                                    
TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (72.49s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-461793 --driver=docker  --container-runtime=crio
E0226 12:04:41.615816  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-461793 --driver=docker  --container-runtime=crio: (32.496976689s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-464303 --driver=docker  --container-runtime=crio
E0226 12:05:01.682999  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:05:09.304786  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-464303 --driver=docker  --container-runtime=crio: (34.475410692s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-461793
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-464303
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-464303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-464303
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-464303: (2.003704925s)
helpers_test.go:175: Cleaning up "first-461793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-461793
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-461793: (2.276318592s)
--- PASS: TestMinikubeProfile (72.49s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.89s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-552048 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-552048 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.888185085s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.89s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-552048 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.43s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-566053 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-566053 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.430255515s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.43s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-566053 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-552048 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-552048 --alsologtostderr -v=5: (1.637472429s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-566053 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-566053
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-566053: (1.207199261s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.21s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-566053
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-566053: (7.210215262s)
--- PASS: TestMountStart/serial/RestartStopped (8.21s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-566053 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (83.01s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-498943 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0226 12:07:17.831240  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-498943 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m22.480169595s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (83.01s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.68s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498943 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498943 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-498943 -- rollout status deployment/busybox: (2.662860592s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498943 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498943 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498943 -- exec busybox-5b5d89c9d6-kfvhj -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498943 -- exec busybox-5b5d89c9d6-kjfnx -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498943 -- exec busybox-5b5d89c9d6-kfvhj -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498943 -- exec busybox-5b5d89c9d6-kjfnx -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498943 -- exec busybox-5b5d89c9d6-kfvhj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498943 -- exec busybox-5b5d89c9d6-kjfnx -- nslookup kubernetes.default.svc.cluster.local
E0226 12:07:33.167847  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.68s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498943 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498943 -- exec busybox-5b5d89c9d6-kfvhj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498943 -- exec busybox-5b5d89c9d6-kfvhj -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498943 -- exec busybox-5b5d89c9d6-kjfnx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-498943 -- exec busybox-5b5d89c9d6-kjfnx -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.08s)

                                                
                                    
TestMultiNode/serial/AddNode (23.19s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-498943 -v 3 --alsologtostderr
E0226 12:07:45.523620  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-498943 -v 3 --alsologtostderr: (22.492906296s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.19s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-498943 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 cp testdata/cp-test.txt multinode-498943:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 cp multinode-498943:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3690212741/001/cp-test_multinode-498943.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 cp multinode-498943:/home/docker/cp-test.txt multinode-498943-m02:/home/docker/cp-test_multinode-498943_multinode-498943-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943-m02 "sudo cat /home/docker/cp-test_multinode-498943_multinode-498943-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 cp multinode-498943:/home/docker/cp-test.txt multinode-498943-m03:/home/docker/cp-test_multinode-498943_multinode-498943-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943-m03 "sudo cat /home/docker/cp-test_multinode-498943_multinode-498943-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 cp testdata/cp-test.txt multinode-498943-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 cp multinode-498943-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3690212741/001/cp-test_multinode-498943-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 cp multinode-498943-m02:/home/docker/cp-test.txt multinode-498943:/home/docker/cp-test_multinode-498943-m02_multinode-498943.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943 "sudo cat /home/docker/cp-test_multinode-498943-m02_multinode-498943.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 cp multinode-498943-m02:/home/docker/cp-test.txt multinode-498943-m03:/home/docker/cp-test_multinode-498943-m02_multinode-498943-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943-m03 "sudo cat /home/docker/cp-test_multinode-498943-m02_multinode-498943-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 cp testdata/cp-test.txt multinode-498943-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 cp multinode-498943-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3690212741/001/cp-test_multinode-498943-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 cp multinode-498943-m03:/home/docker/cp-test.txt multinode-498943:/home/docker/cp-test_multinode-498943-m03_multinode-498943.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943 "sudo cat /home/docker/cp-test_multinode-498943-m03_multinode-498943.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 cp multinode-498943-m03:/home/docker/cp-test.txt multinode-498943-m02:/home/docker/cp-test_multinode-498943-m03_multinode-498943-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 ssh -n multinode-498943-m02 "sudo cat /home/docker/cp-test_multinode-498943-m03_multinode-498943-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.92s)
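Every step above is one of two primitives: copy a file to, from, or between nodes, then read it back over ssh to confirm the contents. The general shape, with <node>, <local-file>, and <remote-path> as placeholders:
	$ out/minikube-linux-arm64 -p multinode-498943 cp <local-file> <node>:<remote-path>
	$ out/minikube-linux-arm64 -p multinode-498943 ssh -n <node> "sudo cat <remote-path>"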

                                                
                                    
TestMultiNode/serial/StopNode (2.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-498943 node stop m03: (1.222188757s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-498943 status: exit status 7 (579.276953ms)

                                                
                                                
-- stdout --
	multinode-498943
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-498943-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-498943-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-498943 status --alsologtostderr: exit status 7 (577.763394ms)

                                                
                                                
-- stdout --
	multinode-498943
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-498943-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-498943-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0226 12:08:10.863656  688229 out.go:291] Setting OutFile to fd 1 ...
	I0226 12:08:10.865852  688229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 12:08:10.865875  688229 out.go:304] Setting ErrFile to fd 2...
	I0226 12:08:10.865882  688229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 12:08:10.866329  688229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
	I0226 12:08:10.867085  688229 out.go:298] Setting JSON to false
	I0226 12:08:10.867140  688229 mustload.go:65] Loading cluster: multinode-498943
	I0226 12:08:10.869114  688229 config.go:182] Loaded profile config "multinode-498943": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 12:08:10.869154  688229 status.go:255] checking status of multinode-498943 ...
	I0226 12:08:10.870798  688229 notify.go:220] Checking for updates...
	I0226 12:08:10.871829  688229 cli_runner.go:164] Run: docker container inspect multinode-498943 --format={{.State.Status}}
	I0226 12:08:10.900149  688229 status.go:330] multinode-498943 host status = "Running" (err=<nil>)
	I0226 12:08:10.900195  688229 host.go:66] Checking if "multinode-498943" exists ...
	I0226 12:08:10.900818  688229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-498943
	I0226 12:08:10.918366  688229 host.go:66] Checking if "multinode-498943" exists ...
	I0226 12:08:10.918701  688229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 12:08:10.918748  688229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-498943
	I0226 12:08:10.941702  688229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36876 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/multinode-498943/id_rsa Username:docker}
	I0226 12:08:11.041951  688229 ssh_runner.go:195] Run: systemctl --version
	I0226 12:08:11.046359  688229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 12:08:11.058499  688229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 12:08:11.130396  688229 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:66 SystemTime:2024-02-26 12:08:11.119289917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 12:08:11.131055  688229 kubeconfig.go:92] found "multinode-498943" server: "https://192.168.58.2:8443"
	I0226 12:08:11.131085  688229 api_server.go:166] Checking apiserver status ...
	I0226 12:08:11.131132  688229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0226 12:08:11.143209  688229 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1282/cgroup
	I0226 12:08:11.152948  688229 api_server.go:182] apiserver freezer: "12:freezer:/docker/665d72d8c6f5a9cc66402befc88307d7558f5b73f98f36ba44965a4af9171eed/crio/crio-bb677ed17ac3d45035fc01a010506c7e6f1cc47de1839b78fa445f418e2eeb85"
	I0226 12:08:11.153028  688229 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/665d72d8c6f5a9cc66402befc88307d7558f5b73f98f36ba44965a4af9171eed/crio/crio-bb677ed17ac3d45035fc01a010506c7e6f1cc47de1839b78fa445f418e2eeb85/freezer.state
	I0226 12:08:11.162136  688229 api_server.go:204] freezer state: "THAWED"
	I0226 12:08:11.162166  688229 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0226 12:08:11.170488  688229 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0226 12:08:11.170517  688229 status.go:421] multinode-498943 apiserver status = Running (err=<nil>)
	I0226 12:08:11.170529  688229 status.go:257] multinode-498943 status: &{Name:multinode-498943 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0226 12:08:11.170556  688229 status.go:255] checking status of multinode-498943-m02 ...
	I0226 12:08:11.170896  688229 cli_runner.go:164] Run: docker container inspect multinode-498943-m02 --format={{.State.Status}}
	I0226 12:08:11.186872  688229 status.go:330] multinode-498943-m02 host status = "Running" (err=<nil>)
	I0226 12:08:11.186899  688229 host.go:66] Checking if "multinode-498943-m02" exists ...
	I0226 12:08:11.187210  688229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-498943-m02
	I0226 12:08:11.203332  688229 host.go:66] Checking if "multinode-498943-m02" exists ...
	I0226 12:08:11.203732  688229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0226 12:08:11.203785  688229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-498943-m02
	I0226 12:08:11.222405  688229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36881 SSHKeyPath:/home/jenkins/minikube-integration/18222-608626/.minikube/machines/multinode-498943-m02/id_rsa Username:docker}
	I0226 12:08:11.321926  688229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0226 12:08:11.335085  688229 status.go:257] multinode-498943-m02 status: &{Name:multinode-498943-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0226 12:08:11.335174  688229 status.go:255] checking status of multinode-498943-m03 ...
	I0226 12:08:11.335519  688229 cli_runner.go:164] Run: docker container inspect multinode-498943-m03 --format={{.State.Status}}
	I0226 12:08:11.354591  688229 status.go:330] multinode-498943-m03 host status = "Stopped" (err=<nil>)
	I0226 12:08:11.354616  688229 status.go:343] host is not running, skipping remaining checks
	I0226 12:08:11.354623  688229 status.go:257] multinode-498943-m03 status: &{Name:multinode-498943-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
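The stderr trace shows how `status` decides the control plane is healthy: it finds the kube-apiserver process, reads its freezer cgroup state, then probes /healthz on the advertised endpoint. A rough manual equivalent of that last step, assuming the endpoint from this run and skipping TLS verification:
	$ curl -k https://192.168.58.2:8443/healthz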

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-498943 node start m03 --alsologtostderr: (11.836727231s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.65s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (122.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-498943
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-498943
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-498943: (24.854576549s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-498943 --wait=true -v=8 --alsologtostderr
E0226 12:08:56.214743  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 12:09:41.615191  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-498943 --wait=true -v=8 --alsologtostderr: (1m37.372685705s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-498943
--- PASS: TestMultiNode/serial/RestartKeepsNodes (122.39s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-498943 node delete m03: (4.369085657s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.15s)
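The go-template at multinode_test.go:460 prints one Ready condition per remaining node. A roughly equivalent (but not identical) check with jsonpath:
	$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'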

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-498943 stop: (23.735123584s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-498943 status: exit status 7 (94.682987ms)

                                                
                                                
-- stdout --
	multinode-498943
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-498943-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-498943 status --alsologtostderr: exit status 7 (97.135458ms)

                                                
                                                
-- stdout --
	multinode-498943
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-498943-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0226 12:10:55.440615  696391 out.go:291] Setting OutFile to fd 1 ...
	I0226 12:10:55.440791  696391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 12:10:55.440803  696391 out.go:304] Setting ErrFile to fd 2...
	I0226 12:10:55.440809  696391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 12:10:55.441060  696391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
	I0226 12:10:55.441242  696391 out.go:298] Setting JSON to false
	I0226 12:10:55.441281  696391 mustload.go:65] Loading cluster: multinode-498943
	I0226 12:10:55.441384  696391 notify.go:220] Checking for updates...
	I0226 12:10:55.441700  696391 config.go:182] Loaded profile config "multinode-498943": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 12:10:55.441711  696391 status.go:255] checking status of multinode-498943 ...
	I0226 12:10:55.442234  696391 cli_runner.go:164] Run: docker container inspect multinode-498943 --format={{.State.Status}}
	I0226 12:10:55.460700  696391 status.go:330] multinode-498943 host status = "Stopped" (err=<nil>)
	I0226 12:10:55.460725  696391 status.go:343] host is not running, skipping remaining checks
	I0226 12:10:55.460732  696391 status.go:257] multinode-498943 status: &{Name:multinode-498943 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0226 12:10:55.460773  696391 status.go:255] checking status of multinode-498943-m02 ...
	I0226 12:10:55.461091  696391 cli_runner.go:164] Run: docker container inspect multinode-498943-m02 --format={{.State.Status}}
	I0226 12:10:55.478444  696391 status.go:330] multinode-498943-m02 host status = "Stopped" (err=<nil>)
	I0226 12:10:55.478470  696391 status.go:343] host is not running, skipping remaining checks
	I0226 12:10:55.478478  696391 status.go:257] multinode-498943-m02 status: &{Name:multinode-498943-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.93s)
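As in StopNode, the non-zero exits here are the expected outcome: `status` reports a stopped host with a non-zero exit code (7 in this run), and the test treats that as success. A quick way to observe the convention against the same profile:
	$ out/minikube-linux-arm64 -p multinode-498943 status; echo "exit=$?"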

                                                
                                    
TestMultiNode/serial/RestartMultiNode (76.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-498943 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-498943 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m15.47504498s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-498943 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (76.29s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-498943
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-498943-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-498943-m02 --driver=docker  --container-runtime=crio: exit status 14 (88.070721ms)

                                                
                                                
-- stdout --
	* [multinode-498943-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-498943-m02' is duplicated with machine name 'multinode-498943-m02' in profile 'multinode-498943'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-498943-m03 --driver=docker  --container-runtime=crio
E0226 12:12:17.831485  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:12:33.168146  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-498943-m03 --driver=docker  --container-runtime=crio: (33.475744806s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-498943
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-498943: exit status 80 (353.751557ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-498943
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-498943-m03 already exists in multinode-498943-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-498943-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-498943-m03: (2.087171361s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.07s)

                                                
                                    
TestPreload (141.79s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-727281 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-727281 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m20.149486128s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-727281 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-727281 image pull gcr.io/k8s-minikube/busybox: (1.794023155s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-727281
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-727281: (5.776808655s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-727281 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0226 12:14:41.616056  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-727281 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (51.452809023s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-727281 image list
helpers_test.go:175: Cleaning up "test-preload-727281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-727281
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-727281: (2.359146007s)
--- PASS: TestPreload (141.79s)
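The preload flow above is: start a cluster with --preload=false on an older Kubernetes version, pull an extra image, stop, restart with preloads enabled, and confirm the pulled image is still present. The manual equivalent of the final verification, using the profile name from this run:
	$ out/minikube-linux-arm64 -p test-preload-727281 image list | grep busybox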

                                                
                                    
TestScheduledStopUnix (108.36s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-993356 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-993356 --memory=2048 --driver=docker  --container-runtime=crio: (31.676055437s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-993356 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-993356 -n scheduled-stop-993356
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-993356 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-993356 --cancel-scheduled
E0226 12:16:04.665419  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-993356 -n scheduled-stop-993356
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-993356
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-993356 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-993356
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-993356: exit status 7 (79.616506ms)

                                                
                                                
-- stdout --
	scheduled-stop-993356
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-993356 -n scheduled-stop-993356
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-993356 -n scheduled-stop-993356: exit status 7 (82.987394ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-993356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-993356
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-993356: (4.982877977s)
--- PASS: TestScheduledStopUnix (108.36s)
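The scheduled-stop flags exercised above work the same way outside the test: schedule a stop, cancel it, schedule a shorter one, and wait for the host to report Stopped.
	$ minikube stop -p scheduled-stop-993356 --schedule 5m
	$ minikube stop -p scheduled-stop-993356 --cancel-scheduled
	$ minikube stop -p scheduled-stop-993356 --schedule 15s
	$ minikube status -p scheduled-stop-993356 --format='{{.Host}}'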

                                                
                                    
TestInsufficientStorage (13.28s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-427294 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-427294 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.789435229s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2ed5476a-9bae-4b38-8a28-0b575d3e5fce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-427294] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a3b0eeaa-dd64-4c17-b4ba-3d784180e5dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18222"}}
	{"specversion":"1.0","id":"5a5e1092-74bc-4cd8-a481-b137ac994a96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2cc3f8ee-6c3e-43a8-b8d4-ccdd4596c514","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig"}}
	{"specversion":"1.0","id":"90e25973-604a-4dac-9548-a67a6af7f395","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube"}}
	{"specversion":"1.0","id":"71b8383e-9647-42b0-b1ec-68df690b833f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9309453c-260f-4876-8ee0-20a7dc9efeb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3f763ee1-d1eb-498e-8315-edb388957824","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"89deab8c-aed4-4a3d-8fd8-870a8862cbbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"71a572a0-9a56-4c75-bfc9-2c1b9e970a20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf6d34ae-384a-49ba-afb5-f66d9fd279e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e79e4b00-6cc3-4318-8c29-37423b42186f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-427294 in cluster insufficient-storage-427294","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b05799c4-2fcf-4b66-8d41-ff8396cbf1aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708008208-17936 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e5704509-4298-430b-bbd3-cf642c0d0bce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"afb00053-c5fe-4146-a33c-74f03a4101ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-427294 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-427294 --output=json --layout=cluster: exit status 7 (301.044353ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-427294","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-427294","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0226 12:17:13.210004  712776 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-427294" does not appear in /home/jenkins/minikube-integration/18222-608626/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-427294 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-427294 --output=json --layout=cluster: exit status 7 (287.589194ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-427294","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-427294","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0226 12:17:13.501176  712830 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-427294" does not appear in /home/jenkins/minikube-integration/18222-608626/kubeconfig
	E0226 12:17:13.511605  712830 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/insufficient-storage-427294/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-427294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-427294
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-427294: (1.905764985s)
--- PASS: TestInsufficientStorage (13.28s)
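The exit-26 failure above is induced by the test's MINIKUBE_TEST_STORAGE_CAPACITY/MINIKUBE_TEST_AVAILABLE_STORAGE overrides rather than a genuinely full disk. Outside the test, the advice in the error event is the practical fix; for example, on a host using the Docker driver:
	$ docker system prune -a
	$ minikube start -p insufficient-storage-427294 --force    # --force skips the storage check, as the message notes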

                                                
                                    
TestRunningBinaryUpgrade (108.72s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4276061932 start -p running-upgrade-462105 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4276061932 start -p running-upgrade-462105 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.252648132s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-462105 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0226 12:22:17.831734  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:22:33.167308  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-462105 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.560745357s)
helpers_test.go:175: Cleaning up "running-upgrade-462105" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-462105
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-462105: (2.814194716s)
--- PASS: TestRunningBinaryUpgrade (108.72s)

                                                
                                    
TestKubernetesUpgrade (386.22s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-647247 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-647247 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.247617172s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-647247
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-647247: (1.931861289s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-647247 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-647247 status --format={{.Host}}: exit status 7 (117.229869ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-647247 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-647247 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m43.796690272s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-647247 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-647247 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-647247 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (97.778467ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-647247] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-647247
	    minikube start -p kubernetes-upgrade-647247 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6472472 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-647247 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-647247 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-647247 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.692961304s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-647247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-647247
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-647247: (2.226363641s)
--- PASS: TestKubernetesUpgrade (386.22s)
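The supported path exercised above is stop-then-start with a newer --kubernetes-version; downgrades are refused with the K8S_DOWNGRADE_UNSUPPORTED suggestion quoted in the stderr. The upgrade sequence, using the versions from this run:
	$ minikube start -p kubernetes-upgrade-647247 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	$ minikube stop -p kubernetes-upgrade-647247
	$ minikube start -p kubernetes-upgrade-647247 --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=crio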

                                                
                                    
TestMissingContainerUpgrade (145.29s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1609670727 start -p missing-upgrade-269815 --memory=2200 --driver=docker  --container-runtime=crio
E0226 12:17:17.831125  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:17:33.167201  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1609670727 start -p missing-upgrade-269815 --memory=2200 --driver=docker  --container-runtime=crio: (1m8.701024203s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-269815
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-269815: (10.384743631s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-269815
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-269815 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-269815 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m2.820666167s)
helpers_test.go:175: Cleaning up "missing-upgrade-269815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-269815
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-269815: (2.153649138s)
--- PASS: TestMissingContainerUpgrade (145.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-737289 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-737289 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (79.397862ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-737289] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
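The exit-14 error is the expected guardrail: --no-kubernetes and --kubernetes-version are mutually exclusive. As the message itself suggests, clearing any persisted version first avoids the conflict:
	$ minikube config unset kubernetes-version
	$ minikube start -p NoKubernetes-737289 --no-kubernetes --driver=docker --container-runtime=crio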

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-737289 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-737289 --driver=docker  --container-runtime=crio: (40.306544688s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-737289 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.79s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (29.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-737289 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-737289 --no-kubernetes --driver=docker  --container-runtime=crio: (26.50258571s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-737289 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-737289 status -o json: exit status 2 (497.062055ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-737289","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-737289
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-737289: (2.281005598s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.28s)

                                                
                                    
TestNoKubernetes/serial/Start (6.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-737289 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-737289 --no-kubernetes --driver=docker  --container-runtime=crio: (6.060905678s)
--- PASS: TestNoKubernetes/serial/Start (6.06s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-737289 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-737289 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.953932ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
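Note: the assertion here is the non-zero exit. Over the profile's SSH session, `systemctl is-active` reports the kubelet unit as inactive (remote exit status 3, surfaced on stderr above), and `minikube ssh` itself exits 1. Reproduced with the exact command from this run:

	out/minikube-linux-arm64 ssh -p NoKubernetes-737289 "sudo systemctl is-active --quiet service kubelet"
	echo $?   # non-zero while Kubernetes is disabled for this profile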

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (5.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (3.167309671s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (2.225951038s)
--- PASS: TestNoKubernetes/serial/ProfileList (5.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-737289
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-737289: (1.233421114s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-737289 --driver=docker  --container-runtime=crio
E0226 12:18:40.883861  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-737289 --driver=docker  --container-runtime=crio: (6.926570031s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.93s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-737289 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-737289 "sudo systemctl is-active --quiet service kubelet": exit status 1 (288.378657ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
E0226 12:19:41.615344  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
--- PASS: TestStoppedBinaryUpgrade/Setup (1.18s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (77.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1415033361 start -p stopped-upgrade-535150 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1415033361 start -p stopped-upgrade-535150 --memory=2200 --vm-driver=docker  --container-runtime=crio: (46.099565564s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1415033361 -p stopped-upgrade-535150 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1415033361 -p stopped-upgrade-535150 stop: (2.595297152s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-535150 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-535150 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.601088304s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (77.30s)
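Note: the upgrade path exercised here is start-with-old-binary, stop, then start again with the binary under test; `/tmp/minikube-v1.26.0.1415033361` appears to be a temporary copy of the v1.26.0 release staged by the Setup step. The same sequence, condensed from the commands above:

	/tmp/minikube-v1.26.0.1415033361 start -p stopped-upgrade-535150 --memory=2200 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.26.0.1415033361 -p stopped-upgrade-535150 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-535150 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio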

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-535150
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-535150: (1.243513225s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                    
x
+
TestPause/serial/Start (58.71s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-534129 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-534129 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (58.713788366s)
--- PASS: TestPause/serial/Start (58.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (6.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-618329 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-618329 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (272.169682ms)

                                                
                                                
-- stdout --
	* [false-618329] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18222
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0226 12:25:58.381487  752219 out.go:291] Setting OutFile to fd 1 ...
	I0226 12:25:58.381599  752219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 12:25:58.381604  752219 out.go:304] Setting ErrFile to fd 2...
	I0226 12:25:58.381609  752219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0226 12:25:58.381896  752219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18222-608626/.minikube/bin
	I0226 12:25:58.382292  752219 out.go:298] Setting JSON to false
	I0226 12:25:58.383114  752219 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":90505,"bootTime":1708859854,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0226 12:25:58.383182  752219 start.go:139] virtualization:  
	I0226 12:25:58.388588  752219 out.go:177] * [false-618329] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0226 12:25:58.396984  752219 out.go:177]   - MINIKUBE_LOCATION=18222
	I0226 12:25:58.397060  752219 notify.go:220] Checking for updates...
	I0226 12:25:58.400215  752219 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0226 12:25:58.402548  752219 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18222-608626/kubeconfig
	I0226 12:25:58.407190  752219 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18222-608626/.minikube
	I0226 12:25:58.409533  752219 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0226 12:25:58.412122  752219 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0226 12:25:58.415124  752219 config.go:182] Loaded profile config "force-systemd-env-763409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0226 12:25:58.415236  752219 driver.go:392] Setting default libvirt URI to qemu:///system
	I0226 12:25:58.449280  752219 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0226 12:25:58.449392  752219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0226 12:25:58.558269  752219 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:59 SystemTime:2024-02-26 12:25:58.548123395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0226 12:25:58.558379  752219 docker.go:295] overlay module found
	I0226 12:25:58.565319  752219 out.go:177] * Using the docker driver based on user configuration
	I0226 12:25:58.567586  752219 start.go:299] selected driver: docker
	I0226 12:25:58.567606  752219 start.go:903] validating driver "docker" against <nil>
	I0226 12:25:58.567620  752219 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0226 12:25:58.570228  752219 out.go:177] 
	W0226 12:25:58.572226  752219 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0226 12:25:58.574493  752219 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-618329 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-618329

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-618329

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-618329

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-618329

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-618329

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-618329

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-618329

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-618329

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-618329

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-618329

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-618329

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-618329" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-618329" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-618329

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-618329"

                                                
                                                
----------------------- debugLogs end: false-618329 [took: 5.734199496s] --------------------------------
helpers_test.go:175: Cleaning up "false-618329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-618329
--- PASS: TestNetworkPlugins/group/false (6.22s)
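Note: the exit status 14 above is what this group verifies: with --container-runtime=crio, minikube rejects --cni=false because CRI-O needs a CNI, so the false-618329 profile is never created and every debugLogs probe afterwards reports a missing context or profile. A hedged sketch of a start line that satisfies the constraint (the bridge value is only illustrative; any supported --cni choice other than false would do):

	out/minikube-linux-arm64 start -p false-618329 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio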

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (120.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-996331 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0226 12:27:17.831161  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:27:33.168036  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-996331 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m0.37273573s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (120.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-996331 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f8add23d-052f-4d4c-a3b9-2053207c3ee8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f8add23d-052f-4d4c-a3b9-2053207c3ee8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.002883636s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-996331 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-996331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-996331 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-996331 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-996331 --alsologtostderr -v=3: (11.971178062s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-996331 -n old-k8s-version-996331
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-996331 -n old-k8s-version-996331: exit status 7 (78.67671ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-996331 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (440.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-996331 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0226 12:29:41.616216  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-996331 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m20.183690079s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-996331 -n old-k8s-version-996331
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (440.67s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (64.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-242520 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-242520 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m4.129090111s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-242520 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6d454970-fc8f-48a6-b457-b6349bb47459] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6d454970-fc8f-48a6-b457-b6349bb47459] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003609654s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-242520 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-242520 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-242520 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-242520 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-242520 --alsologtostderr -v=3: (12.031317081s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-242520 -n no-preload-242520
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-242520 -n no-preload-242520: exit status 7 (78.694972ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-242520 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (617.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-242520 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0226 12:32:17.831238  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:32:33.167308  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 12:32:44.665884  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 12:34:41.616050  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 12:35:20.884101  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-242520 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (10m17.420189545s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-242520 -n no-preload-242520
E0226 12:41:59.020814  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (617.87s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-nflb5" [f0543d9d-6c46-48df-9287-277158f980e0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003408037s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-nflb5" [f0543d9d-6c46-48df-9287-277158f980e0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004299376s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-996331 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-996331 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-996331 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-996331 -n old-k8s-version-996331
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-996331 -n old-k8s-version-996331: exit status 2 (374.914596ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-996331 -n old-k8s-version-996331
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-996331 -n old-k8s-version-996331: exit status 2 (358.398532ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-996331 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-996331 -n old-k8s-version-996331
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-996331 -n old-k8s-version-996331
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.55s)
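Note: the pause round trip above leans on status template output and its exit codes: while paused, `--format={{.APIServer}}` prints Paused and `--format={{.Kubelet}}` prints Stopped, each with exit status 2, and after unpause both checks are re-run (their post-unpause output is not shown in the log, but the subtest passing implies the components are reported running again). Condensed from the commands in this run:

	out/minikube-linux-arm64 pause -p old-k8s-version-996331 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-996331 -n old-k8s-version-996331   # "Paused", exit 2
	out/minikube-linux-arm64 unpause -p old-k8s-version-996331 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-996331 -n old-k8s-version-996331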

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (55.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-500250 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0226 12:37:17.831664  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:37:33.167703  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-500250 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (55.523975047s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.52s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-500250 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cd1a8ef9-a8cb-45a0-8e11-34b20038305c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cd1a8ef9-a8cb-45a0-8e11-34b20038305c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004042056s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-500250 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-500250 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-500250 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.066270831s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-500250 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-500250 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-500250 --alsologtostderr -v=3: (12.000214983s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-500250 -n embed-certs-500250
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-500250 -n embed-certs-500250: exit status 7 (80.441036ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-500250 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (348.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-500250 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0226 12:39:15.177500  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:39:15.182869  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:39:15.193198  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:39:15.213462  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:39:15.253729  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:39:15.334006  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:39:15.494468  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:39:15.815074  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:39:16.455451  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:39:17.736555  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:39:20.296784  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:39:25.417200  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:39:35.657705  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:39:41.615246  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 12:39:56.138510  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:40:37.099523  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-500250 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m48.133667511s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-500250 -n embed-certs-500250
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (348.63s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ncqnn" [a9a29eb6-babb-4490-962d-098c1673dd63] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004361627s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ncqnn" [a9a29eb6-babb-4490-962d-098c1673dd63] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004201868s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-242520 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-242520 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-242520 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-242520 -n no-preload-242520
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-242520 -n no-preload-242520: exit status 2 (357.284523ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-242520 -n no-preload-242520
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-242520 -n no-preload-242520: exit status 2 (380.82424ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-242520 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-242520 -n no-preload-242520
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-242520 -n no-preload-242520
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.31s)
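For reference, a minimal sketch of the pause/unpause verification sequence this test exercises, using the same profile name as in the log above; it assumes the locally built out/minikube-linux-arm64 binary, and the two status checks are expected to exit with status 2 while the cluster is paused:

    out/minikube-linux-arm64 pause -p no-preload-242520 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-242520 -n no-preload-242520   # expected output: Paused (exit status 2)
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-242520 -n no-preload-242520     # expected output: Stopped (exit status 2)
    out/minikube-linux-arm64 unpause -p no-preload-242520 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-242520 -n no-preload-242520
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-242520 -n no-preload-242520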

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-709228 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0226 12:42:17.831766  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:42:33.167320  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-709228 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (49.67115907s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.67s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-709228 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d75c4a1a-e472-4133-966d-98e29f74f8c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d75c4a1a-e472-4133-966d-98e29f74f8c4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003548913s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-709228 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-709228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-709228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.103995187s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-709228 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-709228 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-709228 --alsologtostderr -v=3: (11.991566296s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-709228 -n default-k8s-diff-port-709228
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-709228 -n default-k8s-diff-port-709228: exit status 7 (76.783175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-709228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)
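For reference, a minimal sketch of the stop-then-enable-addon sequence covered by the Stop and EnableAddonAfterStop tests above, using the same profile name as in the log; the status check is expected to exit with status 7 and report Stopped once the host is down:

    out/minikube-linux-arm64 stop -p default-k8s-diff-port-709228 --alsologtostderr -v=3
    out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-709228 -n default-k8s-diff-port-709228   # expected output: Stopped (exit status 7)
    out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-709228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4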

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (611.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-709228 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0226 12:44:15.177676  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-709228 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m11.346887388s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-709228 -n default-k8s-diff-port-709228
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (611.73s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lsbkw" [04e4cdb5-a18b-45eb-9d3c-ee040b06da96] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lsbkw" [04e4cdb5-a18b-45eb-9d3c-ee040b06da96] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004257221s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lsbkw" [04e4cdb5-a18b-45eb-9d3c-ee040b06da96] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003563648s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-500250 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-500250 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-500250 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-500250 -n embed-certs-500250
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-500250 -n embed-certs-500250: exit status 2 (353.71989ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-500250 -n embed-certs-500250
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-500250 -n embed-certs-500250: exit status 2 (368.461388ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-500250 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-500250 -n embed-certs-500250
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-500250 -n embed-certs-500250
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (43.8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-372610 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-372610 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (43.797370051s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.80s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-372610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-372610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.330946396s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-372610 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-372610 --alsologtostderr -v=3: (1.915024655s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.92s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-372610 -n newest-cni-372610
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-372610 -n newest-cni-372610: exit status 7 (80.927065ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-372610 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (32.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-372610 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-372610 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (32.551365097s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-372610 -n newest-cni-372610
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.93s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-372610 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-372610 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-372610 -n newest-cni-372610
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-372610 -n newest-cni-372610: exit status 2 (360.6968ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-372610 -n newest-cni-372610
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-372610 -n newest-cni-372610: exit status 2 (332.790793ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-372610 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-372610 -n newest-cni-372610
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-372610 -n newest-cni-372610
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.95s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (49.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-618329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0226 12:46:19.769696  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
E0226 12:46:19.774952  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
E0226 12:46:19.785234  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
E0226 12:46:19.805814  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
E0226 12:46:19.846731  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
E0226 12:46:19.927457  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
E0226 12:46:20.087845  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
E0226 12:46:20.408952  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
E0226 12:46:21.049895  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
E0226 12:46:22.330644  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
E0226 12:46:24.891131  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
E0226 12:46:30.012337  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
E0226 12:46:40.253044  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-618329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (49.280285282s)
--- PASS: TestNetworkPlugins/group/auto/Start (49.28s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-618329 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-618329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-j2p5p" [aa8ab02a-c49f-4b41-b9c0-027fdf4390f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0226 12:47:00.733825  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-j2p5p" [aa8ab02a-c49f-4b41-b9c0-027fdf4390f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00384888s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-618329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
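For reference, a minimal sketch of the three connectivity checks run against the netcat deployment above (cluster DNS lookup, a netcat probe of localhost, and a netcat probe of the netcat service itself), using the same kubectl context as in the log:

    kubectl --context auto-618329 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"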

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (52.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-618329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0226 12:47:33.167452  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/addons-006797/client.crt: no such file or directory
E0226 12:47:41.694851  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-618329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (52.196597564s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-l9sdg" [e02dcc6b-d720-4d84-9d57-10c3ae832511] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006528482s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-618329 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-618329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-chbst" [8bee4cea-f263-461f-a1c8-76317e2b8991] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-chbst" [8bee4cea-f263-461f-a1c8-76317e2b8991] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004008907s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-618329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (74.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-618329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0226 12:49:15.177154  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:49:24.666679  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 12:49:41.616466  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-618329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m14.105204929s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.11s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2wnkz" [a0867f63-a670-4db3-9baa-30adb193a1d9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00601755s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-618329 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-618329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-grnkr" [908bdb82-f7e5-4ab0-8780-c21c8301e195] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-grnkr" [908bdb82-f7e5-4ab0-8780-c21c8301e195] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005330641s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-618329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-618329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0226 12:51:19.769475  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
E0226 12:51:47.455515  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/no-preload-242520/client.crt: no such file or directory
E0226 12:51:59.324624  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/auto-618329/client.crt: no such file or directory
E0226 12:51:59.330213  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/auto-618329/client.crt: no such file or directory
E0226 12:51:59.340558  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/auto-618329/client.crt: no such file or directory
E0226 12:51:59.360884  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/auto-618329/client.crt: no such file or directory
E0226 12:51:59.401082  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/auto-618329/client.crt: no such file or directory
E0226 12:51:59.481393  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/auto-618329/client.crt: no such file or directory
E0226 12:51:59.642079  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/auto-618329/client.crt: no such file or directory
E0226 12:51:59.962728  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/auto-618329/client.crt: no such file or directory
E0226 12:52:00.603261  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/auto-618329/client.crt: no such file or directory
E0226 12:52:00.884710  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/ingress-addon-legacy-329029/client.crt: no such file or directory
E0226 12:52:01.883490  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/auto-618329/client.crt: no such file or directory
E0226 12:52:04.444064  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/auto-618329/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-618329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m4.61999219s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.62s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-618329 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-618329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5jrp4" [4d7c6bf9-eb53-456e-8767-9dbceb1d9dce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0226 12:52:09.564261  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/auto-618329/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-5jrp4" [4d7c6bf9-eb53-456e-8767-9dbceb1d9dce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.0040833s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-618329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (45.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-618329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0226 12:52:40.285726  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/auto-618329/client.crt: no such file or directory
E0226 12:53:21.246705  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/auto-618329/client.crt: no such file or directory
E0226 12:53:24.087354  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory
E0226 12:53:24.092642  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory
E0226 12:53:24.102895  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory
E0226 12:53:24.123275  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory
E0226 12:53:24.163766  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory
E0226 12:53:24.244175  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory
E0226 12:53:24.404959  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-618329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (45.376911224s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (45.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-618329 "pgrep -a kubelet"
E0226 12:53:24.725971  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-618329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dj7j6" [dc1a5667-ed9c-4acb-8913-9c2c9af13881] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0226 12:53:25.367030  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory
E0226 12:53:26.648163  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory
E0226 12:53:29.209121  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-dj7j6" [dc1a5667-ed9c-4acb-8913-9c2c9af13881] Running
E0226 12:53:34.329758  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004099242s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.29s)
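Note: the NetCatPod step deploys testdata/netcat-deployment.yaml and then polls until a pod labelled app=netcat reports Ready. A one-off equivalent from the command line (a sketch, not the polling helper the test actually uses) would be:

	kubectl --context enable-default-cni-618329 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context enable-default-cni-618329 wait --for=condition=Ready pod -l app=netcat --timeout=15m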

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-618329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)
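Note: the DNS, Localhost and HairPin steps all run inside the netcat pod: nslookup checks in-cluster service DNS, "nc ... localhost 8080" checks the pod can reach its own port locally, and "nc ... netcat 8080" checks hairpin traffic, i.e. the pod reaching itself back through its own Service name. Reproducing the hairpin probe by hand looks like:

	kubectl --context enable-default-cni-618329 exec deployment/netcat -- \
	  /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" && echo "hairpin OK"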

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-t2pfh" [5f06f6ff-f8dd-42f2-b7d4-65617f78ecc2] Running
E0226 12:53:44.570464  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019935443s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)
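Note: UserAppExistsAfterStop (and AddonExistsAfterStop just below) both wait for the pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace to be Running again after the profile has been stopped and restarted. A rough manual equivalent (a sketch using the same label, namespace and timeout as the test):

	kubectl --context default-k8s-diff-port-709228 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-709228 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m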

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-t2pfh" [5f06f6ff-f8dd-42f2-b7d4-65617f78ecc2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003530156s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-709228 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-709228 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)
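Note: VerifyKubernetesImages lists the images cached in the profile and reports any that are not part of the standard minikube/Kubernetes set (here the kindnet and busybox test images). Assuming the JSON output is an array of objects exposing a repoTags field and that jq is available (both are assumptions, not shown in the log above), a similar manual listing would be:

	out/minikube-linux-arm64 -p default-k8s-diff-port-709228 image list --format=json \
	  | jq -r '.[].repoTags[]'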

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-709228 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-709228 --alsologtostderr -v=1: (1.204580061s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-709228 -n default-k8s-diff-port-709228
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-709228 -n default-k8s-diff-port-709228: exit status 2 (467.426036ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-709228 -n default-k8s-diff-port-709228
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-709228 -n default-k8s-diff-port-709228: exit status 2 (422.089046ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-709228 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-709228 --alsologtostderr -v=1: (1.038921798s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-709228 -n default-k8s-diff-port-709228
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-709228 -n default-k8s-diff-port-709228
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.71s)
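Note: the Pause step drives the full pause/unpause cycle. "minikube pause" pauses the control-plane containers and stops the kubelet, after which "minikube status" reports the apiserver as Paused and the kubelet as Stopped and exits with status 2 (hence the "may be ok" annotations above); "minikube unpause" brings everything back so status exits 0 again. By hand, against the same profile:

	out/minikube-linux-arm64 pause -p default-k8s-diff-port-709228
	out/minikube-linux-arm64 status -p default-k8s-diff-port-709228 --format='{{.APIServer}}'   # Paused, exit code 2
	out/minikube-linux-arm64 unpause -p default-k8s-diff-port-709228
	out/minikube-linux-arm64 status -p default-k8s-diff-port-709228                             # exit code 0 once components are Running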
E0226 12:56:02.041660  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/calico-618329/client.crt: no such file or directory
E0226 12:56:07.932211  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (74.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-618329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-618329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m14.903008361s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.90s)
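Note: starting with --cni=flannel installs the kube-flannel DaemonSet, which the ControllerPod step below then waits on (pods labelled app=flannel in the kube-flannel namespace). A hedged manual equivalent, assuming the DaemonSet keeps its default kube-flannel-ds name (consistent with the pod name in the ControllerPod log):

	kubectl --context flannel-618329 -n kube-flannel rollout status ds/kube-flannel-ds --timeout=10m
	kubectl --context flannel-618329 -n kube-flannel get pods -l app=flannel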

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (97.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-618329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0226 12:54:05.050846  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory
E0226 12:54:15.176931  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/old-k8s-version-996331/client.crt: no such file or directory
E0226 12:54:41.615476  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/functional-395953/client.crt: no such file or directory
E0226 12:54:43.167042  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/auto-618329/client.crt: no such file or directory
E0226 12:54:46.011214  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/kindnet-618329/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-618329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m37.808138632s)
--- PASS: TestNetworkPlugins/group/bridge/Start (97.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9hs4b" [b7ba7f09-345e-4b4e-8b4c-f1c25b573208] Running
E0226 12:55:21.073080  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/calico-618329/client.crt: no such file or directory
E0226 12:55:21.079012  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/calico-618329/client.crt: no such file or directory
E0226 12:55:21.089364  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/calico-618329/client.crt: no such file or directory
E0226 12:55:21.109715  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/calico-618329/client.crt: no such file or directory
E0226 12:55:21.149976  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/calico-618329/client.crt: no such file or directory
E0226 12:55:21.230377  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/calico-618329/client.crt: no such file or directory
E0226 12:55:21.390797  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/calico-618329/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004530903s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-618329 "pgrep -a kubelet"
E0226 12:55:21.711617  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/calico-618329/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-618329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fmr2w" [62dbe263-f850-40c4-acde-6ca143f58fdc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0226 12:55:22.352016  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/calico-618329/client.crt: no such file or directory
E0226 12:55:23.632410  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/calico-618329/client.crt: no such file or directory
E0226 12:55:26.193413  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/calico-618329/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-fmr2w" [62dbe263-f850-40c4-acde-6ca143f58fdc] Running
E0226 12:55:31.313836  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/calico-618329/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003358547s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-618329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-618329 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-618329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-89v8h" [34eaea47-9001-42fc-9700-2838f29560a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0226 12:55:41.560736  613988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18222-608626/.minikube/profiles/calico-618329/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-89v8h" [34eaea47-9001-42fc-9700-2838f29560a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003753371s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-618329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-618329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.29s)

                                                
                                    

Test skip (32/314)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.61s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-363091 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-363091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-363091
--- SKIP: TestDownloadOnlyKic (0.61s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-319719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-319719
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-618329 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-618329

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-618329

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-618329

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-618329

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-618329

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-618329

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-618329

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-618329

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-618329

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-618329

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-618329

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-618329" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-618329" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-618329

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-618329"

                                                
                                                
----------------------- debugLogs end: kubenet-618329 [took: 4.665965415s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-618329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-618329
--- SKIP: TestNetworkPlugins/group/kubenet (4.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-618329 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-618329" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-618329" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-618329" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-618329

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-618329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-618329"

                                                
                                                
----------------------- debugLogs end: cilium-618329 [took: 5.01447548s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-618329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-618329
--- SKIP: TestNetworkPlugins/group/cilium (5.21s)

                                                
                                    